public inbox for linux-kernel@vger.kernel.org
* Re: [RFT][PATCH] generic device DMA implementation
@ 2002-12-27 20:21 David Brownell
  2002-12-27 21:40 ` James Bottomley
  2002-12-27 21:47 ` [RFT][PATCH] generic device DMA implementation James Bottomley
  0 siblings, 2 replies; 44+ messages in thread
From: David Brownell @ 2002-12-27 20:21 UTC (permalink / raw)
  To: James Bottomley, linux-kernel

I think you saw that patch to let the new 2.5.53 generic dma code
replace one of the two indirections USB needs.  Here are some of
the key open issues I'm thinking of:

- DMA mapping calls still return no errors; so BUG() out instead?

   Consider systems where DMA-able memory is limited (like SA-1111,
   to 1 MByte); clearly it should be possible for these calls to
   fail, when they can't allocate a bounce buffer.  Or (see below)
   when an invalid argument is provided to a dma mapping call.

   Fix by defining fault returns for the current signatures,
   starting with the api specs:

     * dma_map_sg() returns negative errno (or zero?) when it
       fails.  (Those are illegal sglist lengths.)

     * dma_map_single() returns an arch-specific value, like
       DMA_ADDR_INVALID, when it fails.  (DaveM's suggestion,
       from a while back; it's seemingly arch-specific.)

   Yes, the PCI dma calls would benefit from those bugfixes too.

- Implementation-wise, I'm rather surprised that the generic
   version doesn't just add new bus driver methods rather than
   still insisting everything be PCI underneath.

   It's not clear to me how I'd make, for example, a USB device
   or interface work with dma_map_sg() ... those "generic" calls
   are going to fail (except on x86, where all memory is DMA-able)
   since USB != PCI. Even when usb_buffer_map_sg() would succeed.
   (The second indirection:  the usb controller hardware does the
   mapping, not the device or hcd.  That's usually PCI.)

   Hmm, I suppose there'd need to be a default implementation
   of the mapping operations (for all non-pci busses) that'd
   fail cleanly ... :)

- There's no analogue to pci_pool, and there's nothing like
   "kmalloc" (likely built from N dma-coherent pools).

   That forces drivers to write and maintain memory allocators,
   which is a waste of energy as well as being bug-prone.  So in
   that sense this API isn't a complete functional replacement of
   the current PCI (has pools, ~= kmem_cache_t) or USB (with
   simpler buffer_alloc ~= kmalloc) APIs for dma.

- The API says drivers "must" satisfy dma_get_cache_alignment(),
   yet both implementations, asm-{i386,generic}/dma-mapping.h,
   ignore that rule.

   Are you certain of that rule, for all cache coherency models?
   I thought only some machines (with dma-incoherent caches) had
   that as a hard constraint.  (Otherwise it's a soft one:  even
   if there's cacheline contention, the hardware won't lose data
   when drivers use memory barriers correctly.)

   I expect that combination is likely to be problematic, since the
   previous rule has been (wrongly?) that kmalloc or kmem_cache
   memory is fine for DMA mappings, no size restrictions.  Yet for
   one example on x86 dma_get_cache_alignment() returns 128 bytes,
   but kmalloc has several smaller pool sizes ... and lately will
   align to L1_CACHE_BYTES (wasting memory on small allocs?) even
   when that's smaller than L1_CACHE_MAX (in the new dma calls).

   All the more reason to have a drop-in kmalloc alternative for
   dma-aware code to use, handling such details transparently!

- Isn't arch/i386/kernel/pci-dma.c handling DMA masks wrong?
   It's passing GFP_DMA in cases where GFP_HIGHMEM is correct ...

I'm glad to see progress on making DMA more generic, thanks!

- Dave

^ permalink raw reply	[flat|nested] 44+ messages in thread
* Re: [PATCH] generic device DMA (dma_pool update)
@ 2002-12-31 22:02 Adam J. Richter
  2002-12-31 22:41 ` Andrew Morton
  2002-12-31 23:35 ` David Brownell
  0 siblings, 2 replies; 44+ messages in thread
From: Adam J. Richter @ 2002-12-31 22:02 UTC (permalink / raw)
  To: david-b, James.Bottomley, linux-kernel

David Brownell wrote:

>struct dma_pool *dma_pool_create(char *, struct device *, size_t)
>void dma_pool_destroy (struct dma_pool *pool)
>void *dma_pool_alloc(struct dma_pool *, int mem_flags, dma_addr_t *)
>void dma_pool_free(struct dma_pool *, void *, dma_addr_t)

	I would like to be able to have failure-free, deadlock-free
blocking memory allocation, such as we have with the non-DMA mempool
library so that we can guarantee that drivers that have been
successfully initialized will continue to work regardless of memory
pressure, and reduce error branches that drivers have to deal with.

	Such a facility could be layered on top of your interface
perhaps by extending the mempool code to pass an extra parameter
around.  If so, then you should think about arranging your interface
so that it could be driven with as little glue as possible by mempool.

	I think that the term "pool" is more descriptively used by
mempool and more misleadingly used by the pci_pool code, as there is
no guaranteed pool being reserved in the pci_pool code.  Alas, I don't
have a good alternative term to suggest at the moment.

Adam J. Richter     __     ______________   575 Oroville Road
adam@yggdrasil.com     \ /                  Milpitas, California 95035
+1 408 309-6081         | g g d r a s i l   United States of America
                         "Free Software For The Rest Of Us."

* Re: [PATCH] generic device DMA (dma_pool update)
@ 2002-12-31 23:38 Adam J. Richter
  0 siblings, 0 replies; 44+ messages in thread
From: Adam J. Richter @ 2002-12-31 23:38 UTC (permalink / raw)
  To: akpm; +Cc: david-b, James.Bottomley, linux-kernel

Andrew Morton wrote:
>"Adam J. Richter" wrote:
>> 
>> David Brownell wrote:
>> 
>> >struct dma_pool *dma_pool_create(char *, struct device *, size_t)
>> >void dma_pool_destroy (struct dma_pool *pool)
>> >void *dma_pool_alloc(struct dma_pool *, int mem_flags, dma_addr_t *)
>> >void dma_pool_free(struct dma_pool *, void *, dma_addr_t)
>> 
>>         I would like to be able to have failure-free, deadlock-free
>> blocking memory allocation, such as we have with the non-DMA mempool
>> library so that we can guarantee that drivers that have been
>> successfully initialized will continue to work regardless of memory
>> pressure, and reduce error branches that drivers have to deal with.
>> 
>>         Such a facility could be layered on top of your interface
>> perhaps by extending the mempool code to pass an extra parameter
>> around.  If so, then you should think about arranging your interface
>> so that it could be driven with as little glue as possible by mempool.
>> 

>What is that parameter?  The size, I assume.

	No, dma address.  All allocations in a memory pool (in both
the mempool and pci_pool sense) are the same size.

Adam J. Richter     __     ______________   575 Oroville Road
adam@yggdrasil.com     \ /                  Milpitas, California 95035
+1 408 309-6081         | g g d r a s i l   United States of America
                         "Free Software For The Rest Of Us."

* Re: [PATCH] generic device DMA (dma_pool update)
@ 2003-01-01  0:02 Adam J. Richter
  0 siblings, 0 replies; 44+ messages in thread
From: Adam J. Richter @ 2003-01-01  0:02 UTC (permalink / raw)
  To: akpm; +Cc: david-b, James.Bottomley, linux-kernel

Andrew Morton wrote:
>The existing mempool code can be used to implement this, I believe.  The
>pool->alloc callback is passed an opaque void *, and it returns
>a void * which can point at any old composite caller-defined blob.

	That field is the same for all allocations (its value
is stored in the struct mempool), so you can't use it.

	David's example would work because it happens to store a copy
of the DMA address in the structure being allocated, and the mempool
code currently does not overwrite the contents of any chunks of memory
when it holds a freed chunk to give out later.

	We could generalize David's technique for other data
structures by using a wrapper to store the DMA address and
adopting the convention of having the DMA allocator initially
stuff the DMA address in the beginning of the chunk being
allocated, like so.


/* Set mempool.alloc to this */
void *dma_mempool_alloc_callback(int gfp_mask, void *pool_data)
{
	struct dma_pool *dma_pool = pool_data;
	dma_addr_t dma_addr;
	void *result = dma_pool_alloc(dma_pool, gfp_mask, &dma_addr);
	if (result)
		memcpy(result, &dma_addr, sizeof(dma_addr_t));
	return result;
}

/* Set mempool.free to this */
void dma_mempool_free_callback(void *element, void *pool_data)
{
	struct dma_pool *dma_pool = pool_data;
	dma_pool_free(dma_pool, element, *(dma_addr_t *) element);
}




void *dma_mempool_alloc(mempool_t *pool, int gfp_mask, dma_addr_t *dma_addr)
{
        void *result = mempool_alloc(pool, gfp_mask);
        if (result)
                memcpy(dma_addr, result, sizeof(dma_addr_t));
        return result;
}

void dma_mempool_free(void *element, mempool_t *pool,
		      dma_addr_t dma_addr)
{
        /* We rely on mempool_free not trashing the first
           sizeof(dma_addr_t) bytes of element if it is going to
           give this element to another caller rather than freeing it.
           Currently, the mempool code does not know the size of the
           elements, so this is safe to do, but it would be nice if
           in the future, we could let the mempool code use some of the
           remaining bytes to maintain its free list. */
        memcpy(element, &dma_addr, sizeof(dma_addr_t));
        mempool_free(element, pool);
}



	So, I guess there is no need to change the mempool interface,
at least if we guarantee that mempool is not going to overwrite any
freed memory chunks that it is holding in reserve, although this also
means that there will continue to be a small but unnecessary memory
overhead in the mempool allocator.  I guess that could be addressed
later.

Adam J. Richter     __     ______________   575 Oroville Road
adam@yggdrasil.com     \ /                  Milpitas, California 95035
+1 408 309-6081         | g g d r a s i l   United States of America
                         "Free Software For The Rest Of Us."

* Re: [PATCH] generic device DMA (dma_pool update)
@ 2003-01-01 19:21 Adam J. Richter
  2003-01-01 19:48 ` James Bottomley
  0 siblings, 1 reply; 44+ messages in thread
From: Adam J. Richter @ 2003-01-01 19:21 UTC (permalink / raw)
  To: akpm, James.Bottomley; +Cc: david-b, linux-kernel

James Bottomley wrote:
> void *
> dma_alloc_coherent(struct device *dev, size_t size,
>-                            dma_addr_t *dma_handle)
>+                            dma_addr_t *dma_handle, int flag)

	I thought Andrew Morton's request for a gfp flag was for
allocating memory from a pool (for example, a "read ahead" will want
to abort if memory is unavailable rather than wait).

	The big DMA allocations, however, will always occur during
initialization, and I think will always have the intermediate policy
of "I can block, but I should fail rather than block for more than a
few seconds and potentially deadlock from loading too many drivers on
a system with very little memory."

	Can someone show me or invent an example of two different uses
of dma_alloc_coherent that really should use different policies on
whether to block or not?

Adam J. Richter     __     ______________   575 Oroville Road
adam@yggdrasil.com     \ /                  Milpitas, California 95035
+1 408 309-6081         | g g d r a s i l   United States of America
                         "Free Software For The Rest Of Us."

* Re: [PATCH] generic device DMA (dma_pool update)
@ 2003-01-02  4:13 Adam J. Richter
  2003-01-02 16:41 ` James Bottomley
  2003-01-02 18:26 ` David Brownell
  0 siblings, 2 replies; 44+ messages in thread
From: Adam J. Richter @ 2003-01-02  4:13 UTC (permalink / raw)
  To: david-b; +Cc: akpm, James.Bottomley, linux-kernel

David Brownell wrote:
>James Bottomley wrote:
>> adam@yggdrasil.com said:
>> 
>>>	Can someone show me or invent an example of two different uses of
>>>dma_alloc_coherent that really should use different policies on
>>>whether to block or not? 
>> 
>> 
>> The obvious one is allocations from interrupt routines, which must be 
>> GFP_ATOMIC (ignoring the issue of whether a driver should be doing a memory 
>> allocation in an interrupt).  Allocating large pools at driver initialisation 
>> should probably be GFP_KERNEL as you say.

>More:  not just "from interrupt routines", also "when a spinlock is held".
>Though one expects that if a dma_pool were in use, it'd probably have one
>of that kind of chunk already available (ideally, even initialized).

	Let me clarify or revise my request.  By "show me or invent an
example" I mean describe a case where this would be used, as in
specific hardware devices that Linux has trouble supporting right now,
or specific programs that can't be run efficiently under Linux, etc.
What device would need to do this kind of allocation?  Why haven't I
seen requests for this from people working on real device drivers?
Where is this going to make the kernel smaller, more reliable, faster,
more maintainable, able to make a computer do something it could do
before under Linux, etc.?

	I have trouble understanding why, for example, a USB hard
disk driver would want anything more than a fixed size pool of
transfer descriptors.  At some point you know that you've queued
enough IO so that the driver can be confident that it will be
called again before that queue is completely emptied.


>Many task-initiated allocations would use GFP_KERNEL; not just major driver
>activation points like open().

	Here too, it would help me to understand if you would try
to construct an example.

	Also, your use of the term GFP_KERNEL is potentially
ambiguous.  In some cases GFP_KERNEL seems to mean "wait indefinitely
until memory is available; never fail."  In other cases it means
"perhaps wait a second or two if memory is unavailable, but fail if
memory is not available by then."

	I don't dispute that adding a memory allocation class argument
to dma_alloc_consistent would make it capable of being used in more
cases.  I just don't know if any of those cases actually come up, in
which case the Linux community is probably better off with the smaller
kernel footprint.

Adam J. Richter     __     ______________   575 Oroville Road
adam@yggdrasil.com     \ /                  Milpitas, California 95035
+1 408 309-6081         | g g d r a s i l   United States of America
                         "Free Software For The Rest Of Us."

* Re: [PATCH] generic device DMA (dma_pool update)
@ 2003-01-02 17:04 Adam J. Richter
  0 siblings, 0 replies; 44+ messages in thread
From: Adam J. Richter @ 2003-01-02 17:04 UTC (permalink / raw)
  To: James.Bottomley; +Cc: akpm, david-b, linux-kernel

James Bottomley wrote:
>adam@yggdrasil.com said:
>> 	Let me clarify or revise my request.  By "show me or invent an
>> example" I mean describe a case where this would be used, as in
>> specific hardware devices that Linux has trouble supporting right now,
>> or specific programs that can't be run efficiently under Linux, etc.
>> What device would need to do this kind of allocation?  Why haven't I
>> seen requests for this from people working on real device drivers?
>> Where is this going to make the kernel smaller, more reliable, faster,
>> more maintainable, able to make a computer do something it could do
>> before under Linux, etc.?

>I'm not really the right person to be answering this.  For any transfer you 
>set up (which encompasses all of the SCSI stuff bar target mode and AENs) you 
>should have all the resources ready and waiting in the interrupt, and so never 
>require an in_interrupt allocation.

>However, for unsolicited transfer requests---the best example I can think of 
>would be incoming network packets---it does make sense:  You allocate with 
>GFP_ATOMIC, if the kernel can fulfil the request, fine; if not, you drop the 
>packet on the floor.  Now, whether there's an unsolicited transfer that's 
>going to require coherent memory, that I can't say.  It does seem to be 
>possible, though.

	When a network device driver receives a packet, it gives the
network packet that it pre-allocated to the higher layers with
netif_rx() and then allocates a net packet with dev_alloc_skb(), but
that is non-consistent "streaming" memory.  The consistent memory is
generally for the DMA gather-scatter stub(s), which are typically
reused, since the receive for the packet that arrived has already
been completed.

Adam J. Richter     __     ______________   575 Oroville Road
adam@yggdrasil.com     \ /                  Milpitas, California 95035
+1 408 309-6081         | g g d r a s i l   United States of America
                         "Free Software For The Rest Of Us."

* Re: [PATCH] generic device DMA (dma_pool update)
@ 2003-01-02 22:07 Adam J. Richter
  2003-01-03  0:20 ` Russell King
                   ` (3 more replies)
  0 siblings, 4 replies; 44+ messages in thread
From: Adam J. Richter @ 2003-01-02 22:07 UTC (permalink / raw)
  To: david-b; +Cc: akpm, James.Bottomley, linux-kernel

>Note that this "add gfp flags to dma_alloc_coherent()" issue is a tangent
>to the dma_pool topic ... it's a different "generic device DMA" issue.

	I think we understand each other, but let's walk through
the three levels of it: mempool_alloc, {dma,pci}_pool_alloc, and
{dma,pci}_alloc_consistent.

	We agree that mempool_alloc should take gfp_flags so that
it can fail rather than block on optional IO such as read-aheads.

	Whether pci_pool_alloc (which does not guarantee a reserve of
memory) should continue to take gfp_flags is not something we were
discussing, but is an interesting question for the future.  I see
that pci_pool_alloc is only called with GFP_ATOMIC in two places:

	uhci_alloc_td in drivers/usb/host/uhci-hcd.c
	alloc_safe_buffer in arch/arm/mach-sa1100/sa1111-pcibuf.c

	The use in uhci_alloc_td means that USB IO can fail under
memory load.  Conceivably, swapping could even deadlock this way.
uhci_alloc_td should be going through a mempool which individual
drivers should expand and shrink as they are added and removed so that
transfers that are sent with GFP_KERNEL will be guaranteed not to fail
for lack of memory (and gfp_flags should be passed down so that it
will be possible to abort transfers like read-aheads rather than
blocking, but that's not essential).

	The pci_pool_alloc in sa1111-pcibuf.c is more interesting.
alloc_safe_buffer is used to implement an unusual version of
pci_map_single.  It is unclear to me whether this approach is optimal.
I'll look into this more.

	Anyhow, the question that we actually were talking about is
whether dma_alloc_consistent should take gfp_flags.  So far, the only
potential example that I'm aware of is whether the call to
pci_pool_alloc in sa1111-pcibuf.c needs to call down to
{pci,dma}_alloc_consistent with GFP_ATOMIC.  It's something that I
just noticed and will look into.


>We already have the pci_pool allocator that knows how to cope (timeout and
>retry) with the awkward semantics of pci_alloc_consistent() ...

	pci_pool_alloc currently implements mempool's GFP_KERNEL
semantics (block indefinitely, never fail), but, unlike mempool, it
does not guarantee a minimum memory allocation, so there is no way
to guarantee that it will ever return.  It can hang if there is
insufficient memory.  It looks like you cannot even control-C out
of it.

	By the way, since you call pci_alloc_consistent's GFP_KERNEL
semantics "awkward", are you saying that you think it should wait
indefinitely for memory to become available?

>likewise,
>we can tell that those semantics are problematic, both because they need
>that kind of workaround and because they complicate reusing "better"
>allocator code (like mm/slab.c) on top of the DMA page allocators.

	Maybe it's irrelevant, but I don't understand what you mean by
"better" here.


>> 	Let me clarify or revise my request.  By "show me or invent an
>> example" I mean describe a case where this would be used, as in
>> specific hardware devices that Linux has trouble supporting right now,
>> or specific programs that can't be run efficiently under Linux, etc.

>That doesn't really strike me as a reasonable revision, since that
>wasn't an issue an improved dma_alloc_coherent() syntax was intended
>to address directly.

	It is always reasonable to ask for an example of where a
requested change would be used, as in specific hardware devices that
Linux has trouble supporting right now, or specific programs that
can't be run efficiently under Linux, or some other real external
benefit, because that is the purpose of software.  This applies to
every byte of code in Linux.

	Don't start by positing some situation in the middle of a
call graph.  Instead, I'm asking you to show an example starting
at the top of the call graph.  What *actual* use will cause this
parameter to be useful in reality?  By what external metric
will users benefit?

	I'll look into the sa1111-pcibuf.c case.  That's the only
potential example that I've seen so far.


>To the extent that it's reasonable, you should also be considering
>this corresponding issue:  ways that removing gfp_flags from the
>corresponding generic memory allocator, __get_free_pages(), would be
>improving those characteristics of Linux.  (IMO, it wouldn't.)

	There are other users of __get_free_pages.  If it turns out
that there is no true use for gfp_mask in dma_alloc_consistent then it
might be worth exploring in the future, but we don't have to decide to
eliminate gfp_mask from __get_free_pages just to decide not to add it
to dma_alloc_consistent.


>> 	I have trouble understanding why, for example, a USB hard
>> disk driver would want anything more than a fixed size pool of
>> transfer descriptors.  At some point you know that you've queued
>> enough IO so that the driver can be confident that it will be
>> called again before that queue is completely emptied.

>For better or worse, that's not how it works today and it's not
>likely to change in the 2.5 kernels.  "Transfer Descriptors" are
>resources inside host controller drivers (bus drivers), allocated
>dynamically (GFP_KERNEL in many cases, normally using a pci_pool)
>when USB device drivers (like usb-storage) submit requests (URBs).

>They are invisible to usb device drivers, except implicitly as
>another reason that submitting an urb might trigger -ENOMEM.

	It might not be trivial to fix that but it would be
straightforward to do so.  In the future, submit_urb(...GFP_KERNEL)
should be guaranteed never to fail with -ENOMEM (see my comments
about uhci_alloc_td above on how to fix this).

	This existing, straightforwardly fixable bug does not justify
changing the DMA API just to encrust it further.  Users will be better
off if we fix the bug (it will be able to swap reliably on USB disks,
or at least they'll be closer to it if there are other bugs in the
way).

	In practice, I think that if we just added one, maybe two,
URB's by default for every endpoint when a device is added, that
would be enough to reduce the number of drivers that need to
reserve more URB's than that to few or none.

>And for that matter, the usb-storage driver doesn't "fire and
>forget" as you described; that'd be a network driver model.
>The storage driver needs to block till its request completes.

	That only makes it easier to calculate the number of
URB's that need to be guaranteed to be available.

	By the way, as far as I can tell, usb_stor_msg_common in
drivers/usb/storage/transport.c only waits for the urb to be
transferred, not for the operation to complete.


>> 	Also, your use of the term GFP_KERNEL is potentially
>ambiguous.  In some cases GFP_KERNEL seems to mean "wait indefinitely
>> until memory is available; never fail."  In other cases it means
>> "perhaps wait a second or two if memory is unavailable, but fail if
>> memory is not available by then."

>Hmm, I was unaware that anyone expected GFP_KERNEL (or rather,
>__GFP_WAIT) to guarantee that memory was always returned.  It's
>not called __GFP_NEVERFAIL, after all.

	mempool_alloc does.  That's the point of it.  You calculate
how many objects you need in order to guarantee no deadlocks and
reserve that number in advance (the initial reservation can fail).

>I've always seen such fault returns documented as unrelated to
>allocator parameters ...

	By design, mempool_alloc(GFP_KERNEL) never fails (although
mempool_alloc(GFP_ATOMIC) can).

	By the way, pci_pool_alloc(GFP_KERNEL) also never returns
failure, but I consider that a bug as I explained above.

>and treated callers that break on fault
>returns as buggy, regardless of GFP_* parameters in use.

	It is not a bug for mempool_alloc(GFP_KERNEL) callers to assume
success upon return.  It is a waste of kernel footprint for them to
check for failure.

>A
>random sample of kernel docs agrees with me on that:  both
>in Documentation/* (like the i2c stuff) or the O'Reilly book
>on device drivers.

	mempool really needs a file in Documentation/.  The only reference
to it there is in linux-2.5.54/Documentation/block/biodoc.txt, line 663 ff:
| This makes use of Ingo Molnar's mempool implementation, which enables
| subsystems like bio to maintain their own reserve memory pools for guaranteed
| deadlock-free allocations during extreme VM load. [...]

Adam J. Richter     __     ______________   575 Oroville Road
adam@yggdrasil.com     \ /                  Milpitas, California 95035
+1 408 309-6081         | g g d r a s i l   United States of America
                         "Free Software For The Rest Of Us."


end of thread, other threads:[~2003-01-03  6:44 UTC | newest]

Thread overview: 44+ messages
2002-12-27 20:21 [RFT][PATCH] generic device DMA implementation David Brownell
2002-12-27 21:40 ` James Bottomley
2002-12-28  1:29   ` David Brownell
2002-12-28 16:18     ` James Bottomley
2002-12-28 18:16       ` David Brownell
2002-12-28  1:56   ` David Brownell
2002-12-28 16:13     ` James Bottomley
2002-12-28 17:41       ` David Brownell
2002-12-30 23:11     ` [PATCH] generic device DMA (dma_pool update) David Brownell
2002-12-31 15:00       ` James Bottomley
2002-12-31 17:04         ` David Brownell
2002-12-31 17:23           ` James Bottomley
2002-12-31 18:11             ` David Brownell
2002-12-31 18:44               ` James Bottomley
2002-12-31 19:29                 ` David Brownell
2002-12-31 19:50                   ` James Bottomley
2002-12-31 21:17                     ` David Brownell
2002-12-31 16:36       ` James Bottomley
2002-12-31 17:32         ` David Brownell
2002-12-27 21:47 ` [RFT][PATCH] generic device DMA implementation James Bottomley
2002-12-28  2:28   ` David Brownell
  -- strict thread matches above, loose matches on Subject: below --
2002-12-31 22:02 [PATCH] generic device DMA (dma_pool update) Adam J. Richter
2002-12-31 22:41 ` Andrew Morton
2002-12-31 23:23   ` David Brownell
2002-12-31 23:27     ` Andrew Morton
2002-12-31 23:44       ` David Brownell
2002-12-31 23:47     ` James Bottomley
2003-01-01 17:10   ` James Bottomley
2002-12-31 23:35 ` David Brownell
2002-12-31 23:38 Adam J. Richter
2003-01-01  0:02 Adam J. Richter
2003-01-01 19:21 Adam J. Richter
2003-01-01 19:48 ` James Bottomley
2003-01-02  2:11   ` David Brownell
2003-01-02  4:13 Adam J. Richter
2003-01-02 16:41 ` James Bottomley
2003-01-02 18:26 ` David Brownell
2003-01-02 17:04 Adam J. Richter
2003-01-02 22:07 Adam J. Richter
2003-01-03  0:20 ` Russell King
2003-01-03  4:50 ` David Brownell
2003-01-03  6:11 ` David Brownell
2003-01-03  6:46 ` David Brownell
2003-01-03  6:52   ` William Lee Irwin III
