* Possible discrepancy regarding streaming DMA mappings in DMA-mapping.txt?
@ 2002-05-23 23:24 William Jhun
2002-05-24 5:59 ` David S. Miller
0 siblings, 1 reply; 12+ messages in thread
From: William Jhun @ 2002-05-23 23:24 UTC (permalink / raw)
To: linux-kernel
In the explanation of streaming DMA mappings and holding onto a mapping
for multiple DMA operations, the file Documentation/DMA-mapping.txt
(line 507) says the following:
If you need to use the same streaming DMA region multiple times
and touch the data in between the DMA transfers, just map it
with pci_map_{single,sg}, and after each DMA transfer call
either:
pci_dma_sync_single(dev, dma_handle, size, direction);
or:
pci_dma_sync_sg(dev, sglist, nents, direction);
as appropriate.
However, shouldn't pci_dma_sync_*() be called *before* each
PCI_DMA_TODEVICE DMA transfer (after the CPU write, of course) and
*after* each PCI_DMA_FROMDEVICE DMA transfer (before CPU access)? And,
of course, before and after a "bidirectional" DMA, if appropriate.
Thanks,
William
* Re: Possible discrepancy regarding streaming DMA mappings in DMA-mapping.txt?
2002-05-23 23:24 Possible discrepancy regarding streaming DMA mappings in DMA-mapping.txt? William Jhun
@ 2002-05-24 5:59 ` David S. Miller
2002-05-24 17:43 ` William Jhun
0 siblings, 1 reply; 12+ messages in thread
From: David S. Miller @ 2002-05-24 5:59 UTC (permalink / raw)
To: wjhun; +Cc: linux-kernel
From: William Jhun <wjhun@ayrnetworks.com>
Date: Thu, 23 May 2002 16:24:25 -0700
However, shouldn't pci_dma_sync_*() be called *before* each
PCI_DMA_TODEVICE DMA transfer (after the CPU write, of course) and
*after* each PCI_DMA_FROMDEVICE DMA transfer (before CPU access)? And,
of course, before and after a "bidirectional" DMA, if appropriate.
CPU owns the data before pci_map_{sg,single}(); afterwards the device
owns the data. If the CPU wants ownership again, it must wait for the
device to finish with the data and then do a pci_dma_sync_{sg,single}().
You are thinking about CPU cache flushing, and that is a detail
handled transparently by the DMA APIs. If you follow the rules
described in the documentation and in my previous paragraph,
the arch-specific code does the right thing for you.
* Re: Possible discrepancy regarding streaming DMA mappings in DMA-mapping.txt?
2002-05-24 17:43 ` William Jhun
@ 2002-05-24 17:42 ` David S. Miller
2002-05-24 20:37 ` William Jhun
2002-05-25 3:41 ` [PATCH] Functions to complement pci_dma_sync_{single,sg}(). (was: Re: Possible discrepancy regarding streaming DMA mappings in DMA-mapping.txt?) William Jhun
0 siblings, 2 replies; 12+ messages in thread
From: David S. Miller @ 2002-05-24 17:42 UTC (permalink / raw)
To: wjhun; +Cc: linux-kernel
From: William Jhun <wjhun@ayrnetworks.com>
Date: Fri, 24 May 2002 10:43:46 -0700
So, if I'm not mistaken, you are saying that I need to call
pci_dma_sync_single() *after* the DMA so that the CPU reclaims ownership
of the buffer? That's fine and probably serves a good purpose on other
architectures, but wouldn't I also need to do one before the DMA (after
the CPU write) operation to flush write buffers/writeback any cachelines
I've modified for non-cache-coherent architectures?
I see what your problem is: the interfaces were designed so
that the CPU could read the data. They did not consider writes.
It was designed to handle a case like a networking driver where
a receive packet is inspected before we decide whether we accept the
packet or just give it back to the card.
Feel free to design the "cpu writes, back to device ownership"
interfaces and submit a patch :-)
* Re: Possible discrepancy regarding streaming DMA mappings in DMA-mapping.txt?
2002-05-24 5:59 ` David S. Miller
@ 2002-05-24 17:43 ` William Jhun
2002-05-24 17:42 ` David S. Miller
0 siblings, 1 reply; 12+ messages in thread
From: William Jhun @ 2002-05-24 17:43 UTC (permalink / raw)
To: David S. Miller; +Cc: linux-kernel
On Thu, May 23, 2002 at 10:59:27PM -0700, David S. Miller wrote:
> However, shouldn't pci_dma_sync_*() be called *before* each
> PCI_DMA_TODEVICE DMA transfer (after the CPU write, of course) and
> *after* each PCI_DMA_FROMDEVICE DMA transfer (before CPU access)? And,
> of course, before and after a "bidirectional" DMA, if appropriate.
>
> CPU owns the data before pci_map_{sg,single}(); afterwards the device
> owns the data. If the CPU wants ownership again, it must wait for the
> device to finish with the data and then do a pci_dma_sync_{sg,single}().
Ok, that's a fine way to think about it. On my architecture, there is no
DMA "mapping" issue, so I'm selfishly thinking about the cache coherency
issue. :o)
> You are thinking about CPU cache flushing, and that is a detail
> handled transparently by the DMA APIs. If you follow the rules
> described in the documentation and in my previous paragraph,
> the arch-specific code does the right thing for you.
So in my case, I have a pool of buffers that are being used to DMA data
to a device. Since I keep these buffers for the life of the driver, I
only want to map at the beginning and unmap at the end. Yet, I have to
change the content of these buffers from the CPU side before DMAing. So,
I do the following process:
1) Allocate the buffers
2) Call pci_map_single() (PCI_DMA_TODEVICE) for all buffers
3) Take a buffer, write into it from CPU
4) Call pci_dma_sync_single() on modified buffer to write back modified
cache lines
5) Initiate DMA
6) Put the buffer back into the pool
7) goto 3)
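The sequence above can be modeled in plain C with stubbed-out PCI calls, making the ownership handoffs explicit. This is only a userspace sketch: all function names here are hypothetical stand-ins for the real pci_map_single()/pci_dma_sync_single() kernel calls, and an enum replaces the actual cache maintenance.

```c
#include <assert.h>
#include <string.h>

/* Who may touch the buffer right now, per the ownership model
 * discussed in this thread. */
enum owner { OWNER_CPU, OWNER_DEVICE };

struct dma_buf {
    char data[64];
    enum owner owner;
};

/* Stand-in for pci_map_single(..., PCI_DMA_TODEVICE): after
 * mapping, the device owns the buffer (step 2). */
static void map_todevice(struct dma_buf *b) { b->owner = OWNER_DEVICE; }

/* Stand-in for pci_dma_sync_single(): the CPU reclaims ownership;
 * on a non-coherent arch this is where cache maintenance happens. */
static void sync_for_cpu(struct dma_buf *b) { b->owner = OWNER_CPU; }

/* Stand-in for the writeback William wants before each transfer,
 * returning the buffer to the device. */
static void writeback_for_device(struct dma_buf *b) { b->owner = OWNER_DEVICE; }

/* One pass through steps 3-5: reclaim, CPU write, hand back, DMA. */
static int do_one_transfer(struct dma_buf *b, const char *msg)
{
    sync_for_cpu(b);
    assert(b->owner == OWNER_CPU);    /* safe to write now */
    strcpy(b->data, msg);             /* step 3: CPU write */
    writeback_for_device(b);          /* step 4: flush before DMA */
    assert(b->owner == OWNER_DEVICE);
    return 0;                         /* step 5: initiate DMA here */
}
```

The point of the model is that the buffer must bounce between the two owners on every iteration, even though it is mapped only once.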
I'm developing my driver on MIPS, so I don't have any pci "mapping"
issues, bounce buffers or the like. I know how to maintain cache
coherency throughout this process, but I'm not sure how to write it such
that it fits the PCI-DMA mapping API and will work with other
architectures as well. It'd be nice to solve this now and not deal with
it later. :o)
So, if I'm not mistaken, you are saying that I need to call
pci_dma_sync_single() *after* the DMA so that the CPU reclaims ownership
of the buffer? That's fine and probably serves a good purpose on other
architectures, but wouldn't I also need to do one before the DMA (after
the CPU write) operation to flush write buffers/writeback any cachelines
I've modified for non-cache-coherent architectures? Also, the cache
operations are fairly expensive, and performing another
writeback-invalidate (pci_dma_sync_single()) after the DMA doesn't serve
any purpose on my architecture - so I get a performance penalty for the
sake of compatibility. (Unless I #ifndef it out by architecture.
Ughhhh...)
Thanks,
William
* Re: Possible discrepancy regarding streaming DMA mappings in DMA-mapping.txt?
2002-05-24 20:37 ` William Jhun
@ 2002-05-24 20:26 ` David S. Miller
2002-05-24 20:58 ` William Jhun
0 siblings, 1 reply; 12+ messages in thread
From: David S. Miller @ 2002-05-24 20:26 UTC (permalink / raw)
To: wjhun; +Cc: linux-kernel
I know what you're trying to do, but I'm going to tell you upfront
that this will make the existing case much more inefficient than
it needs to be.
Please, add a new call to handle your case. Thanks.
* Re: Possible discrepancy regarding streaming DMA mappings in DMA-mapping.txt?
2002-05-24 17:42 ` David S. Miller
@ 2002-05-24 20:37 ` William Jhun
2002-05-24 20:26 ` David S. Miller
2002-05-25 3:41 ` [PATCH] Functions to complement pci_dma_sync_{single,sg}(). (was: Re: Possible discrepancy regarding streaming DMA mappings in DMA-mapping.txt?) William Jhun
1 sibling, 1 reply; 12+ messages in thread
From: William Jhun @ 2002-05-24 20:37 UTC (permalink / raw)
To: David S. Miller; +Cc: linux-kernel
On Fri, May 24, 2002 at 10:42:09AM -0700, David S. Miller wrote:
> I see what your problem is: the interfaces were designed so
> that the CPU could read the data. They did not consider writes.
>
> It was designed to handle a case like a networking driver where
> a receive packet is inspected before we decide whether we accept the
> packet or just give it back to the card.
I was thinking about whether we could keep the same API but change the
semantics such that, for a cpu-writes-and-gives-to-driver operation, a
pci_dma_sync_*() called with PCI_DMA_TODEVICE would occur between the
write and the DMA:
1) driver gets buffer from pool and writes into it
2) driver calls pci_dma_sync_single(..., PCI_DMA_TODEVICE) which
in turn copies to a bounce buffer, flushes cache and write
buffers (whichever are relevant per architecture)
3) driver sets up DMA
4) controller DMAs the packet
5) driver acknowledges DMA completion, implicitly "takes back"
buffer and puts into pool
6) goto 1)
The problem concerns the meaning of the driver or controller "owning" a
buffer. Should there be a call between steps 4) and 5) where the driver
"reclaims" the buffer? Yet, by this point, nothing should have changed
in the buffer, so there's no reason to copy from the bounce-buffer or
write-back/invalidate cache lines.
So the question is: After having written to a buffer, called
pci_dma_sync_*(), and DMAed the buffer, is there anything left to do
(for certain architectures) before the driver can re-claim it and begin
writing into the buffer again? If the answer is no, then I propose we
keep the API the way it is and change the semantics such that, for
writing streaming buffers to a driver, pci_dma_sync_*() must be called
after all driver writes have completed and before the DMA occurs, thus
transferring "ownership" to the driver.
But to contradict myself and say how I think the API should change...
The pci_(un)map_*() routines provide a convenient model for maintaining
cache coherency in situations where one maps a buffer, does DMA, and
unmaps it once again. For streaming DMA where a set of buffers stay
mapped, however, using pci_dma_sync_*() to handle two different problems
(providing a mapping that the controller can view and maintaining
cache/write-buffer coherency) is, IMHO, somewhat confusing. Having an
API where separate calls are used for these problems allows the driver
writer to more explicitly say things such as:
[write into buffer]
pci_dma_sync(buffer, TO_DEVICE) // -Does writeback and wbflush
pci_dma_controller_owns(buffer, TO_DEVICE) // -Bounce-buffer copy, etc
[dma to controller]
pci_dma_driver_owns(buffer, TO_DEVICE) // -Prepare for CPU write...
(no need to sync - the buffer couldn't have changed)
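The three-call split sketched above can be made concrete as a tiny userspace state machine. The function names mirror the hypothetical pci_dma_sync/pci_dma_controller_owns/pci_dma_driver_owns calls proposed here (none of these exist in the kernel), and counters merely stand in for the real cache and bounce-buffer work:

```c
#include <assert.h>

/* Legal states for a mapped streaming buffer. */
enum state { DRIVER_OWNS, CONTROLLER_OWNS };

struct buf_state {
    enum state s;
    int writebacks;    /* cache writeback + wbflush operations */
    int bounce_copies; /* copies into a bounce buffer */
};

/* pci_dma_sync(buffer, TO_DEVICE): writeback and wbflush;
 * only legal while the driver owns the buffer. */
static void dma_sync(struct buf_state *b)
{
    assert(b->s == DRIVER_OWNS);
    b->writebacks++;
}

/* pci_dma_controller_owns(buffer, TO_DEVICE): bounce-buffer copy
 * (where relevant), then hand the buffer to the controller. */
static void controller_owns(struct buf_state *b)
{
    assert(b->s == DRIVER_OWNS);
    b->bounce_copies++;
    b->s = CONTROLLER_OWNS;
}

/* pci_dma_driver_owns(buffer, TO_DEVICE): reclaim after the DMA
 * completes; no sync needed, since the device only read the data. */
static void driver_owns(struct buf_state *b)
{
    assert(b->s == CONTROLLER_OWNS);
    b->s = DRIVER_OWNS;
}
```

The asserts encode the ownership discipline: a sync or handoff in the wrong state is a driver bug, which is exactly the kind of error the split API would make visible.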
I have a more complex example that could affect this design, but I'm
going to sit on it for a short while instead of making this e-mail even
longer. :o)
Thanks,
William
* Re: Possible discrepancy regarding streaming DMA mappings in DMA-mapping.txt?
2002-05-24 20:58 ` William Jhun
@ 2002-05-24 20:53 ` David S. Miller
2002-05-24 21:18 ` William Jhun
0 siblings, 1 reply; 12+ messages in thread
From: David S. Miller @ 2002-05-24 20:53 UTC (permalink / raw)
To: wjhun; +Cc: linux-kernel
From: William Jhun <wjhun@ayrnetworks.com>
Date: Fri, 24 May 2002 13:58:42 -0700
Sorry, I'm not clear on this one. I was first proposing (for the short
term, at least) to not change anything at all: all the existing
implementations of pci_dma_sync_*(..., PCI_DMA_TODEVICE) already do what
is required: prepare the buffer to be read via DMA by the controller. Most
drivers won't have to deal with this; most network drivers, for example,
do a pci_map_*() on an skb passed down from the stack and subsequently
pci_unmap_*() those buffers once transmitted, thus having no need for
pci_dma_sync_*()... So I don't see how this makes anything else less
efficient...
The network drivers use PCI_DMA_FROMDEVICE because they are working
on receive packets, which get DMA'd 'from' the device to memory.
This is also the case the SCSI drivers use.
> Please, add a new call to handle your case. Thanks.
Such a call would do what pci_dma_sync_*(..., PCI_DMA_TODEVICE) already
does (unless that is what you want - to have a new call just for the
sake of clarity...).
A call with PCI_DMA_TODEVICE means nearly the same thing
as PCI_DMA_FROMDEVICE. Namely "revoke PCI ownership of memory",
which means "take the memory out of the PCI domain".
Implementation-wise this means:
1) If PCI_DMA_TODEVICE, purge any data cached in PCI controller
prefetch caches that require SW flushing.
2) If PCI_DMA_FROMDEVICE, do the actions in #1, plus, if the CPU
is not cache coherent, flush caches so that PCI-written
data is visible to the CPU.
That is what the interface does.
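These two rules can be written down as a small sketch. The direction constants match the kernel's, but model_dma_sync() and its counters are hypothetical stand-ins for the per-arch maintenance operations, not real kernel code:

```c
#include <assert.h>

/* Direction flags, as in the kernel's <linux/pci.h>. */
#define PCI_DMA_TODEVICE   1
#define PCI_DMA_FROMDEVICE 2

/* Counters standing in for the real maintenance operations. */
static int prefetch_purges;   /* PCI controller prefetch-cache purges */
static int cpu_cache_flushes; /* CPU cache flushes on non-coherent CPUs */

/* Model of pci_dma_sync_single() per the two rules above. */
static void model_dma_sync(int direction)
{
    /* Rule #1: both directions purge any data cached in PCI
     * controller prefetch caches that need SW flushing. */
    prefetch_purges++;

    /* Rule #2: FROMDEVICE additionally flushes the CPU cache so
     * that device-written data is visible to a non-coherent CPU. */
    if (direction == PCI_DMA_FROMDEVICE)
        cpu_cache_flushes++;
}
```

The asymmetry is the crux of the argument: neither direction of the sync does the "make CPU writes visible to the device" flush that a repeated to-device transfer needs.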
Now that we've established that, you want a new operation.
That operation is "re-prepare DMA memory so that the PCI realm
will see it". The semantics of this would be, on CPUs which are
not cache coherent with PCI, to flush the cache to prevent
inconsistencies between PCI and the CPU.
The CPU cache flush is needed in both the to-device and from-device cases.
So do you finally understand why you must create a new interface
to accomplish what you want?
* Re: Possible discrepancy regarding streaming DMA mappings in DMA-mapping.txt?
2002-05-24 20:26 ` David S. Miller
@ 2002-05-24 20:58 ` William Jhun
2002-05-24 20:53 ` David S. Miller
0 siblings, 1 reply; 12+ messages in thread
From: William Jhun @ 2002-05-24 20:58 UTC (permalink / raw)
To: David S. Miller; +Cc: linux-kernel
On Fri, May 24, 2002 at 01:26:41PM -0700, David S. Miller wrote:
>
> I know what you're trying to do, but I'm going to tell you upfront
> that this will make the existing case much more inefficient than
> it needs to be.
Sorry, I'm not clear on this one. I was first proposing (for the short
term, at least) to not change anything at all: all the existing
implementations of pci_dma_sync_*(..., PCI_DMA_TODEVICE) already do what
is required: prepare the buffer to be read via DMA by the controller. Most
drivers won't have to deal with this; most network drivers, for example,
do a pci_map_*() on an skb passed down from the stack and subsequently
pci_unmap_*() those buffers once transmitted, thus having no need for
pci_dma_sync_*()... So I don't see how this makes anything else less
efficient...
>
> Please, add a new call to handle your case. Thanks.
Such a call would do what pci_dma_sync_*(..., PCI_DMA_TODEVICE) already
does (unless that is what you want - to have a new call just for the
sake of clarity...).
Thanks,
William
* Re: Possible discrepancy regarding streaming DMA mappings in DMA-mapping.txt?
2002-05-24 20:53 ` David S. Miller
@ 2002-05-24 21:18 ` William Jhun
0 siblings, 0 replies; 12+ messages in thread
From: William Jhun @ 2002-05-24 21:18 UTC (permalink / raw)
To: David S. Miller; +Cc: linux-kernel
On Fri, May 24, 2002 at 01:53:52PM -0700, David S. Miller wrote:
> Now that we've established that, you want a new operation.
> That operation is "re-prepare DMA memory so that the PCI realm
> will see it". The semantics of this would be, on CPUs which are
> not cache coherent with PCI, to flush the cache to prevent
> inconsistencies between PCI and the CPU.
>
> The CPU cache flush is needed in both the to-device and from-device cases.
>
> So do you finally understand why you must create a new interface
> to accomplish what you want?
Yes, that clears things up. I'll work on this.
Thanks,
William
* [PATCH] Functions to complement pci_dma_sync_{single,sg}(). (was: Re: Possible discrepancy regarding streaming DMA mappings in DMA-mapping.txt?)
2002-05-24 17:42 ` David S. Miller
2002-05-24 20:37 ` William Jhun
@ 2002-05-25 3:41 ` William Jhun
2002-05-25 23:04 ` [PATCH] Functions to complement pci_dma_sync_{single,sg}() David S. Miller
1 sibling, 1 reply; 12+ messages in thread
From: William Jhun @ 2002-05-25 3:41 UTC (permalink / raw)
To: David S. Miller; +Cc: linux-kernel
On Fri, May 24, 2002 at 10:42:09AM -0700, David S. Miller wrote:
> I see what your problem is: the interfaces were designed so
> that the CPU could read the data. They did not consider writes.
>
> It was designed to handle a case like a networking driver where
> a receive packet is inspected before we decide whether we accept the
> packet or just give it back to the card.
>
> Feel free to design the "cpu writes, back to device ownership"
> interfaces and submit a patch :-)
Here is a patch to demonstrate which calls I'd like added. I'll send a
similar one to linux-mips if this interface looks ok...
This patch adds two functions, pci_dma_prep_single() and
pci_dma_prep_sg(), for a driver to relinquish a buffer to a PCI device
after having gained "ownership" via pci_dma_sync_*(). Essentially,
they should do whatever pci_map_*() does to prepare the buffer for
being accessed by the device, except for the mapping itself (e.g.
flush cache lines, copy to a bounce buffer if direction is
PCI_DMA_TODEVICE, etc).
This is useful if one wants to use a buffer for making repeated DMA
transfers into a device without having to unmap the buffer. In such a
case, a typical usage would be:
1) Take a buffer that is "owned" by the PCI device
2) pci_dma_sync_single(...., PCI_DMA_TODEVICE); buffer is
now owned by the driver
3) Write into the buffer
4) pci_dma_prep_single(...., PCI_DMA_TODEVICE);
5) Perform DMA transfer
6) goto 1)
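The usage above can be exercised with stubbed versions of the two calls. This is a hedged userspace model, not kernel code: the stub names follow the patch, counters replace the real cache operations, and a second array stands in for "what the device sees" after the prep-time flush.

```c
#include <assert.h>
#include <string.h>

#define PCI_DMA_TODEVICE 1

static int syncs;  /* pci_dma_sync_single(): driver reclaims the buffer */
static int preps;  /* pci_dma_prep_single(): buffer goes back to device */

static char dma_buf[32];     /* the mapped streaming buffer */
static char device_view[32]; /* what the card sees after the writeback */

/* Stand-in for pci_dma_sync_single(): step 2, take ownership. */
static void stub_dma_sync_single(int direction)
{
    (void)direction;
    syncs++;
}

/* Stand-in for pci_dma_prep_single(): step 4, writeback so that
 * CPU stores become visible to the device, then relinquish. */
static void stub_dma_prep_single(int direction)
{
    (void)direction;
    memcpy(device_view, dma_buf, sizeof(dma_buf));
    preps++;
}

/* One iteration of steps 1-5 from the description above. */
static void one_cycle(const char *payload)
{
    stub_dma_sync_single(PCI_DMA_TODEVICE);  /* step 2 */
    strcpy(dma_buf, payload);                /* step 3 */
    stub_dma_prep_single(PCI_DMA_TODEVICE);  /* step 4 */
    /* step 5: the DMA transfer would start here */
}
```

Without the prep call in step 4, device_view would never pick up the CPU's write in the model, which mirrors the stale-cache failure on a non-coherent machine.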
Thanks,
William
--
Index: include/asm-i386/pci.h
===================================================================
RCS file: /vger/linux/include/asm-i386/pci.h,v
retrieving revision 1.16.2.2
diff -c -r1.16.2.2 pci.h
*** include/asm-i386/pci.h 24 Jan 2002 13:38:58 -0000 1.16.2.2
--- include/asm-i386/pci.h 25 May 2002 03:24:56 -0000
***************
*** 209,214 ****
--- 209,247 ----
flush_write_buffers();
}
+ /*
+ * Prepare buffer for a DMA transfer after driver temporarily
+ * re-claimed it with pci_dma_sync_*().
+ *
+ * In essence, this "returns" the buffer to the PCI device.
+ */
+ static inline void pci_dma_prep_single(struct pci_dev *hwdev,
+ dma_addr_t dma_handle,
+ size_t size, int direction)
+ {
+ if (direction == PCI_DMA_NONE)
+ BUG();
+
+ flush_write_buffers();
+ }
+
+ /*
+ * Prepare buffer for a DMA transfer after driver temporarily
+ * re-claimed it with pci_dma_sync_*().
+ *
+ * The same as pci_dma_prep_single but for a scatter-gather list,
+ * same rules and usage.
+ */
+ static inline void pci_dma_prep_sg(struct pci_dev *hwdev,
+ struct scatterlist *sg,
+ int nelems, int direction)
+ {
+ if (direction == PCI_DMA_NONE)
+ BUG();
+
+ flush_write_buffers();
+ }
+
/* Return whether the given PCI device DMA address mask can
* be supported properly. For example, if your device can
* only drive the low 24-bits during PCI bus mastering, then
* Re: [PATCH] Functions to complement pci_dma_sync_{single,sg}().
2002-05-25 3:41 ` [PATCH] Functions to complement pci_dma_sync_{single,sg}(). (was: Re: Possible discrepancy regarding streaming DMA mappings in DMA-mapping.txt?) William Jhun
@ 2002-05-25 23:04 ` David S. Miller
2002-05-26 7:09 ` [PATCH] DMA-mapping.txt (was Re: [PATCH] Functions to complement pci_dma_sync_{single,sg}().) William Jhun
0 siblings, 1 reply; 12+ messages in thread
From: David S. Miller @ 2002-05-25 23:04 UTC (permalink / raw)
To: wjhun; +Cc: linux-kernel
Where are the documentation updates? That is the most important part
of the patch.
* [PATCH] DMA-mapping.txt (was Re: [PATCH] Functions to complement pci_dma_sync_{single,sg}().)
2002-05-25 23:04 ` [PATCH] Functions to complement pci_dma_sync_{single,sg}() David S. Miller
@ 2002-05-26 7:09 ` William Jhun
0 siblings, 0 replies; 12+ messages in thread
From: William Jhun @ 2002-05-26 7:09 UTC (permalink / raw)
To: David S. Miller; +Cc: linux-kernel
On Sat, May 25, 2002 at 04:04:58PM -0700, David S. Miller wrote:
>
> Where are the documentation updates? That is the most important part
> of the patch.
Indeed. My apologies.
William
---
Index: Documentation/DMA-mapping.txt
===================================================================
RCS file: /vger/linux/Documentation/DMA-mapping.txt,v
retrieving revision 1.17.2.2
diff -u -r1.17.2.2 DMA-mapping.txt
--- Documentation/DMA-mapping.txt 5 Mar 2002 12:40:36 -0000 1.17.2.2
+++ Documentation/DMA-mapping.txt 26 May 2002 07:12:25 -0000
@@ -516,13 +516,35 @@
as appropriate.
+Once you have touched the region (after having called pci_dma_sync_*()),
+and need to give it back to the device for another DMA transfer, call:
+
+ pci_dma_prep_single(dev, dma_handle, size, direction);
+
+or:
+
+ pci_dma_prep_sg(dev, sglist, nents, direction);
+
+This will prepare the buffer for another DMA transfer and synchronize any
+CPU writes to it with the device's view if direction is set to
+PCI_DMA_TODEVICE or PCI_DMA_BIDIRECTIONAL.
+
+Note: Prior to the introduction of pci_dma_prep_*, drivers transferring
+data from a device after a pci_dma_sync_* implicitly returned "ownership"
+of the region to the device without having to expressly call a routine to
+do so. In these cases, data is only read from the buffer, and the CPU
+"view" of the region is synchronized again on the next call to
+pci_dma_sync_*. However, for drivers which may need to make repeated DMA
+transfers to a device from a given region, this call is necessary to make
+any data written by the CPU visible to the device.
+
After the last DMA transfer call one of the DMA unmap routines
pci_unmap_{single,sg}. If you don't touch the data from the first pci_map_*
-call till pci_unmap_*, then you don't have to call the pci_dma_sync_*
-routines at all.
+call till pci_unmap_*, then you don't have to call the pci_dma_sync_* or
+pci_dma_prep_* routines at all.
Here is pseudo code which shows a situation in which you would need
-to use the pci_dma_sync_*() interfaces.
+to use the pci_dma_sync_*() and pci_dma_prep_*() interfaces.
my_card_setup_receive_buffer(struct my_card *cp, char *buffer, int len)
{
@@ -563,7 +585,11 @@
pass_to_upper_layers(cp->rx_buf);
make_and_setup_new_rx_buf(cp);
} else {
- /* Just give the buffer back to the card. */
+ /* Prepare the buffer for another DMA transfer,
+ * then give the buffer back to the card.
+ */
+ pci_dma_prep_single(cp->pdev, cp->rx_dma,
+ cp->rx_len, PCI_DMA_FROMDEVICE);
give_rx_buf_to_card(cp);
}
}
@@ -676,7 +702,15 @@
size_t len, int direction);
This must be done before the CPU looks at the buffer again.
-This interface behaves identically to pci_dma_sync_{single,sg}().
+
+After calling pci_dac_dma_sync_{single,sg}, and before returning the buffer
+to the device for another DMA operation, invoke:
+
+ void pci_dac_dma_prep_single(struct pci_dev *pdev, dma64_addr_t
+ dma_addr, size_t len, int direction);
+
+These two interfaces behave identically to pci_dma_sync_{single,sg}() and
+pci_dma_prep_{single,sg}(), respectively.
If you need to get back to the PAGE/OFFSET tuple from a dma64_addr_t
the following interfaces are provided: