* Any restrictions on DMA address boundary?
@ 2003-09-22 15:13 Bret Indrelee
2003-09-22 16:59 ` Matt Porter
0 siblings, 1 reply; 14+ messages in thread
From: Bret Indrelee @ 2003-09-22 15:13 UTC (permalink / raw)
To: Linux PPC Embedded mailing list
Are there alignment restrictions on the address boundary you can
start a DMA at when using the pci_map_single() / pci_unmap_single()
calls?
For some reason, I thought they had to be cache line aligned, but when
I went back through the documentation I couldn't find any such
restriction.
-Bret
--
Bret Indrelee QLogic Corporation
Bret.Indrelee@qlogic.com 6321 Bury Drive, St 13, Eden Prairie, MN 55346
* Re: Any restrictions on DMA address boundary?
2003-09-22 15:13 Any restrictions on DMA address boundary? Bret Indrelee
@ 2003-09-22 16:59 ` Matt Porter
2003-09-25 17:56 ` Roland Dreier
0 siblings, 1 reply; 14+ messages in thread
From: Matt Porter @ 2003-09-22 16:59 UTC (permalink / raw)
To: Bret Indrelee; +Cc: Linux PPC Embedded mailing list
On Mon, Sep 22, 2003 at 10:13:26AM -0500, Bret Indrelee wrote:
>
> Are there alignment restrictions on the address boundary you can
> start a DMA at when using the pci_map_single() / pci_unmap_single()
> calls?
>
> For some reason, I thought they had to be cache line aligned, but when
> I went back through the documentation I couldn't find any such
> restriction.
Cache line alignment is an unwritten requirement that hasn't yet been
addressed in DMA-mapping.txt. Unfortunately, kmalloc() doesn't
guarantee this, so it's necessary to over-allocate and align the
buffer yourself. You can use L1_CACHE_BYTES for this.
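A minimal sketch of that over-allocate-and-align approach might look
like this (the helper name is illustrative only, and this is not the
patch mentioned below):

#include <linux/slab.h>
#include <linux/cache.h>        /* L1_CACHE_BYTES */

/*
 * Round the requested size up to whole cache lines and over-allocate so
 * the start of the returned buffer can also be rounded up to a cache
 * line.  The caller must keep *raw around for kfree().
 */
static void *kmalloc_dma_aligned(size_t size, void **raw)
{
        unsigned long addr;

        size = (size + L1_CACHE_BYTES - 1) & ~((size_t)L1_CACHE_BYTES - 1);
        *raw = kmalloc(size + L1_CACHE_BYTES - 1, GFP_KERNEL);
        if (!*raw)
                return NULL;

        addr = ((unsigned long)*raw + L1_CACHE_BYTES - 1)
                & ~((unsigned long)L1_CACHE_BYTES - 1);
        return (void *)addr;
}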
Roland Dreier had a patch which added a little helper for this
stuff as well as docs. Unfortunately, it's been dropped on the
floor until somebody with interest and time in non-coherent
platforms picks it up. http://lwn.net/Articles/2482/
-Matt
* Re: Any restrictions on DMA address boundary?
2003-09-22 16:59 ` Matt Porter
@ 2003-09-25 17:56 ` Roland Dreier
2003-09-25 18:15 ` Bret Indrelee
0 siblings, 1 reply; 14+ messages in thread
From: Roland Dreier @ 2003-09-25 17:56 UTC (permalink / raw)
To: Matt Porter; +Cc: Bret Indrelee, Linux PPC Embedded mailing list
Matt> Roland Dreier had a patch which added a little helper for
Matt> this stuff as well as docs. Unfortunately, it's been
Matt> dropped on the floor until somebody with interest and time
Matt> in non-coherent platforms picks it
Matt> up. http://lwn.net/Articles/2482/
As I recall, the discussion ended with people like David Miller
agreeing that 2.6 would fix this in a better way, but that my patch
was OK for 2.4. However, unaligned DMA buffers only seem to cause
pain on non-mainstream platforms, so there was never much push to have
my patch merged in 2.4. I've never really pushed my patch since we've
fixed all the problems with our own specific embedded PPC 4xx
platform, and no one else seems to care very much.
Best,
Roland
* Re: Any restrictions on DMA address boundary?
2003-09-25 17:56 ` Roland Dreier
@ 2003-09-25 18:15 ` Bret Indrelee
2003-09-25 18:26 ` Roland Dreier
2003-09-25 19:56 ` Matt Porter
0 siblings, 2 replies; 14+ messages in thread
From: Bret Indrelee @ 2003-09-25 18:15 UTC (permalink / raw)
To: Roland Dreier; +Cc: Matt Porter, Linux PPC Embedded mailing list
On 25 Sep 2003, Roland Dreier wrote:
> As I recall, the discussion ended with people like David Miller
> agreeing that 2.6 would fix this in a better way, but that my patch
> was OK for 2.4. However, unaligned DMA buffers only seem to cause
> pain on non-mainstream platforms, so there was never much push to have
> my patch merged in 2.4. I've never really pushed my patch since we've
> fixed all the problems with our own specific embedded PPC 4xx
> platform, and no one else seems to care very much.
I've read through the old thread about short DMAs, but there are still
things that aren't clear to me.
What exactly is the issue?
As I understand it, if there is a DMA to the same cache line as something
that the CPU is referencing, you've got a cache problem.
Does it matter what type of transfer the PCI device is doing? If it
always does 32-bit burst memory write transfers instead of memory
write & invalidate does that make a difference?
Right now, I'm interested in the PPC and x86 compatible (Pentium,
Pentium 4, Geode) systems, trying to understand the differences and
requirements of each.
Any insight into the details of the problem would be appreciated, on
list or off.
-Bret
--
Bret Indrelee QLogic Corporation
Bret.Indrelee@qlogic.com 6321 Bury Drive, St 13, Eden Prairie, MN 55346
* Re: Any restrictions on DMA address boundary?
2003-09-25 18:15 ` Bret Indrelee
@ 2003-09-25 18:26 ` Roland Dreier
2003-09-25 20:54 ` Eugene Surovegin
2003-09-25 19:56 ` Matt Porter
1 sibling, 1 reply; 14+ messages in thread
From: Roland Dreier @ 2003-09-25 18:26 UTC (permalink / raw)
To: Bret Indrelee; +Cc: Matt Porter, Linux PPC Embedded mailing list
Bret> What exactly is the issue?
Bret> As I understand it, if there is a DMA to the same cache line
Bret> as something that the CPU is referencing, you've got a cache
Bret> problem.
The problem is the following. Suppose for concreteness that we're
working on a system with 32-byte cache lines, and there is no
coherency between PCI and the CPU cache. (This describes the IBM
PowerPC 440GP.)
Now suppose you have a 16-byte DMA buffer, and some other unrelated
data in the same cache line. Let's say you want to start a DMA from
an external PCI device into that buffer. The first thing you do is
invalidate the cache line containing the DMA buffer, so that you get
the data the device is about to write and not any data that might
already be in the CPU cache.
You have to do this invalidate before you initiate the DMA, because
otherwise there is the risk of the CPU evicting that cache line and
writing it back to memory after the DMA has occurred, and stomping on
the data the device has written.
Then you tell the device to do the DMA, and go off and do other stuff
while waiting for the DMA to complete. If that other stuff touches
the other 16 bytes of unrelated data that shares the cache line with
the DMA buffer, the CPU will bring that line back into the cache. If
that happens before the device's data has landed in memory, the CPU is
left looking at a stale copy and won't see the data the device writes.
People often come up with complicated schemes like flushing and
invalidating the cache before and after the DMA transfer, but there's
always a scenario where the DMA buffer and/or the unrelated data get
corrupted. The only solution is to make sure that you don't put
unrelated data in the same cache line as a DMA buffer.
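To make the scenario concrete, here is a sketch of the problem layout
and one possible fix (the field names are made up; a plain GCC aligned
attribute is used):

#include <linux/types.h>
#include <linux/cache.h>        /* L1_CACHE_BYTES */

/* Bad: with 32-byte lines, rx_buf and driver_flags share a cache line,
 * so invalidating that line for the DMA throws away CPU-side updates to
 * driver_flags, and refilling it hides the device's data. */
struct bad_layout {
        u8  rx_buf[16];         /* written by the PCI device via DMA */
        u32 driver_flags;       /* touched by the CPU while the DMA runs */
};

/* Better: give the DMA buffer whole cache lines of its own.  The
 * containing object must itself be allocated cache line aligned. */
struct good_layout {
        u8  rx_buf[16] __attribute__((aligned(L1_CACHE_BYTES)));
        u32 driver_flags __attribute__((aligned(L1_CACHE_BYTES)));
};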
Bret> Does it matter what type of transfer the PCI device is
Bret> doing? If it always does 32-bit burst memory write transfers
Bret> instead of memory write & invalidate does that make a
Bret> difference?
No.
- Roland
* Re: Any restrictions on DMA address boundary?
2003-09-25 18:15 ` Bret Indrelee
2003-09-25 18:26 ` Roland Dreier
@ 2003-09-25 19:56 ` Matt Porter
2003-09-25 20:20 ` Bret Indrelee
2003-09-25 20:25 ` Eugene Surovegin
1 sibling, 2 replies; 14+ messages in thread
From: Matt Porter @ 2003-09-25 19:56 UTC (permalink / raw)
To: Bret Indrelee; +Cc: Roland Dreier, Matt Porter, Linux PPC Embedded mailing list
On Thu, Sep 25, 2003 at 01:15:15PM -0500, Bret Indrelee wrote:
> I've read through the old thread about short DMAs, but still there are
> things that aren't clear to me.
>
> What exactly is the issue?
The issue is confined to processors which do not support hardware
snooping of the external bus. On those parts, management of the
caches is performed in software, and software cache management has
a granularity of the processor's cache line size. When a
buffer is allocated using the allowed methods (as defined in
DMA-mapping.txt) to obtain memory for use in DMA, there is
no guarantee that the buffer is cacheline aligned. Imagine
one buffer allocated for internal driver management and another
buffer allocated for inbound DMA transactions from a bus
master; they can share the same cacheline. An inbound DMA
transaction occurs, the usual streaming DMA API call is
made, and the platform-specific implementation invalidates the
entire cacheline. This can corrupt the contents of the
management buffer, which is being accessed by the CPU
only.
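In code, the streaming sequence being described is roughly the
following sketch (pdev, buf and len stand in for whatever the driver
already has):

#include <linux/pci.h>

static void rx_dma_example(struct pci_dev *pdev, void *buf, size_t len)
{
        dma_addr_t handle;

        /* On a non-snooping CPU this is where the cache lines covering
         * buf are invalidated -- including any unrelated data that
         * happens to share them. */
        handle = pci_map_single(pdev, buf, len, PCI_DMA_FROMDEVICE);

        /* ... hand "handle" to the bus master and wait for the DMA to
         *     complete, keeping the CPU away from buf meanwhile ... */

        pci_unmap_single(pdev, handle, len, PCI_DMA_FROMDEVICE);
        /* Only now may the CPU safely look at what the device wrote. */
}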
> As I understand it, if there is a DMA to the same cache line as something
> that the CPU is referencing, you've got a cache problem.
On certain types of processors, yes.
> Does it matter what type of transfer the PCI device is doing? If it
> always does 32-bit burst memory write transfers instead of memory
> write & invalidate does that make a difference?
>
> Right now, I'm interested in the PPC and x86 compatible (Pentium,
> Pentium 4, Geode) systems, trying to understand the differences and
> requirements of each.
Which PPCs? Classic PPC != PPC8xx != PPC40x != Book E PPC. :)
PPC8xx and PPC4xx require software cache coherency. If you want
your products to work on PPC44x (I hope you do, they are targeted
at markets where QLogic storage controllers are used), ensuring that
your DMA buffers are cacheline aligned at the head and tail is
important.
-Matt
* Re: Any restrictions on DMA address boundary?
2003-09-25 19:56 ` Matt Porter
@ 2003-09-25 20:20 ` Bret Indrelee
2003-09-25 20:55 ` Matt Porter
2003-09-25 20:25 ` Eugene Surovegin
1 sibling, 1 reply; 14+ messages in thread
From: Bret Indrelee @ 2003-09-25 20:20 UTC (permalink / raw)
To: Matt Porter; +Cc: Bret Indrelee, Roland Dreier, Linux PPC Embedded mailing list
On Thu, 25 Sep 2003, Matt Porter wrote:
> On Thu, Sep 25, 2003 at 01:15:15PM -0500, Bret Indrelee wrote:
> > I've read through the old thread about short DMAs, but still there are
> > things that aren't clear to me.
> >
> > What exactly is the issue?
>
> The issue is confined to processors which do not support hardware
> snooping of the external bus. In this case, management of the
> caches is performed in software.
Figuring out which systems have these sorts of issues is what I'm
currently puzzling through. Where the heck should I expect
to find this in the databooks for the various products?
[ snip ]
> > Right now, I'm interested in the PPC and x86 compatible (Pentium,
> > Pentium 4, Geode) systems, trying to understand the differences and
> > requirements of each.
>
> Which PPCs? Classic PPC != PPC8xx != PPC40x != Book E PPC. :)
My immediate concern is 8245 and the Intel/Pentium style processors.
In the near term for PPC, the 8245 and maybe 405E.
> PPC8xx and PPC4xx require software cache coherency. If you want
> your products to work on PPC44x (I hope you do, they are targetted
> at markets where qlogic storage controllers are used) ensuring that
> your DMA buffers are cacheline aligned at the head/tail is
> important.
It looked like on the PPC, if I aligned to 32 bytes (0x20), that should
handle it for now. This is for embedded, not the HBA controller, so I
shouldn't need to worry about CONFIG_PPC64BRIDGE.
On Pentium, I have to figure out if they do the snooping or not. I
suspect that they do, but finding cache coherency problems is bad
enough that I need a more definitive answer than that.
-Bret
--
Bret Indrelee QLogic Corporation
Bret.Indrelee@qlogic.com 6321 Bury Drive, St 13, Eden Prairie, MN 55346
* Re: Any restrictions on DMA address boundary?
2003-09-25 19:56 ` Matt Porter
2003-09-25 20:20 ` Bret Indrelee
@ 2003-09-25 20:25 ` Eugene Surovegin
2003-09-25 20:32 ` Bret Indrelee
2003-09-25 20:41 ` Matt Porter
1 sibling, 2 replies; 14+ messages in thread
From: Eugene Surovegin @ 2003-09-25 20:25 UTC (permalink / raw)
To: Matt Porter
Cc: Bret Indrelee, Roland Dreier, Matt Porter,
Linux PPC Embedded mailing list
At 12:56 PM 9/25/2003, Matt Porter wrote:
>When a buffer is allocated using the allowed methods (as defined in
>DMA-mapping.txt) to obtain memory for use in DMA, there is
>no guarantee that the buffer is cacheline aligned.
Hmm, I don't think this is true.
DMA-mapping.txt explicitly states that pci_alloc_consistent() returns
aligned memory buffer:
" ... The cpu return address and the DMA bus master address are both
guaranteed to be aligned to the smallest PAGE_SIZE order which
is greater than or equal to the requested size. This invariant
exists (for example) to guarantee that if you allocate a chunk
which is smaller than or equal to 64 kilobytes, the extent of the
buffer you receive will not cross a 64K boundary..."
I think it's safe to assume that PAGE_SIZE alignment also guarantees
cacheline alignment for all existing CPUs.
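For comparison, a sketch of the consistent API being quoted (pdev and
BUF_SIZE are placeholders); the page alignment is why head/tail sharing
never arises here:

#include <linux/pci.h>

#define BUF_SIZE 1024           /* illustrative */

static void *setup_consistent_buf(struct pci_dev *pdev, dma_addr_t *dma)
{
        /* The returned buffer is at least PAGE_SIZE aligned, hence also
         * cacheline aligned, and the mapping is kept coherent. */
        void *cpu_addr = pci_alloc_consistent(pdev, BUF_SIZE, dma);

        /* ... give *dma to the device; the CPU may touch cpu_addr
         *     without any explicit cache management ... */
        return cpu_addr;
}

/* later: pci_free_consistent(pdev, BUF_SIZE, cpu_addr, dma_handle); */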
Eugene.
* Re: Any restrictions on DMA address boundary?
2003-09-25 20:25 ` Eugene Surovegin
@ 2003-09-25 20:32 ` Bret Indrelee
2003-09-25 20:57 ` Matt Porter
2003-09-25 20:41 ` Matt Porter
1 sibling, 1 reply; 14+ messages in thread
From: Bret Indrelee @ 2003-09-25 20:32 UTC (permalink / raw)
To: Eugene Surovegin; +Cc: Linux PPC Embedded mailing list
On Thu, 25 Sep 2003, Eugene Surovegin wrote:
> At 12:56 PM 9/25/2003, Matt Porter wrote:
> >When a buffer is allocated using the allowed methods (as defined in
> >DMA-mapping.txt) to obtain memory for use in DMA, there is
> >no guarantee that the buffer is cacheline aligned.
>
> Hmm, I don't think this is true.
>
> DMA-mapping.txt explicitly states that pci_alloc_consistent() returns
> aligned memory buffer:
It has been cut from the conversation, but I'm using streaming DMA
mappings. Specifically, pci_map_single() and pci_unmap_single().
My reading of the 2.4 DMA-mapping.txt is that use of pci_alloc_consistent()
is specific to Consistent DMA mappings. Same for the pci_pool_ interface.
-Bret
--
Bret Indrelee QLogic Corporation
Bret.Indrelee@qlogic.com 6321 Bury Drive, St 13, Eden Prairie, MN 55346
* Re: Any restrictions on DMA address boundary?
2003-09-25 20:25 ` Eugene Surovegin
2003-09-25 20:32 ` Bret Indrelee
@ 2003-09-25 20:41 ` Matt Porter
1 sibling, 0 replies; 14+ messages in thread
From: Matt Porter @ 2003-09-25 20:41 UTC (permalink / raw)
To: Eugene Surovegin
Cc: Matt Porter, Bret Indrelee, Roland Dreier,
Linux PPC Embedded mailing list
On Thu, Sep 25, 2003 at 01:25:00PM -0700, Eugene Surovegin wrote:
> At 12:56 PM 9/25/2003, Matt Porter wrote:
> >When a buffer is allocated using the allowed methods (as defined in
> >DMA-mapping.txt) to obtain memory for use in DMA, there is
> >no guarantee that the buffer is cacheline aligned.
>
> Hmm, I don't think this is true.
>
> DMA-mapping.txt explicitly states that pci_alloc_consistent() returns
> aligned memory buffer:
>
> " ... The cpu return address and the DMA bus master address are both
> guaranteed to be aligned to the smallest PAGE_SIZE order which
> is greater than or equal to the requested size. This invariant
> exists (for example) to guarantee that if you allocate a chunk
> which is smaller than or equal to 64 kilobytes, the extent of the
> buffer you receive will not cross a 64K boundary..."
>
> I think it's safe to assume that PAGE_SIZE alignment also guarantees
> cacheline alignment for all existing CPUs.
Yes, that's correct. However, I was alluding to kmalloc()'ed
buffers that are to be used with the streaming calls in the
DMA API.
-Matt
* Re: Any restrictions on DMA address boundary?
2003-09-25 18:26 ` Roland Dreier
@ 2003-09-25 20:54 ` Eugene Surovegin
0 siblings, 0 replies; 14+ messages in thread
From: Eugene Surovegin @ 2003-09-25 20:54 UTC (permalink / raw)
To: Roland Dreier; +Cc: Bret Indrelee, Matt Porter, Linux PPC Embedded mailing list
At 11:26 AM 9/25/2003, Roland Dreier wrote:
>People often come up with complicated schemes like flushing and
>invalidating the cache before and after the DMA transfer, but there's
>always a scenario where the DMA buffer and/or the unrelated data get
>corrupted. The only solution is to make sure that you don't put
>unrelated data in the same cache line as a DMA buffer.
Well, I'm one of those strange people.
Actually, those schemes are quite trivial and they help in _real_ life.
I fully understand that they are not 100% foolproof, but I think they are
_better_ than doing _nothing_.
I already _wasted_ a lot of my time debugging cache coherency problems.
I cannot audit _all_ drivers/subsystems I use, nor can I force their
authors to do this.
Eugene.
* Re: Any restrictions on DMA address boundary?
2003-09-25 20:20 ` Bret Indrelee
@ 2003-09-25 20:55 ` Matt Porter
2003-09-25 21:27 ` Bret Indrelee
0 siblings, 1 reply; 14+ messages in thread
From: Matt Porter @ 2003-09-25 20:55 UTC (permalink / raw)
To: Bret Indrelee; +Cc: Matt Porter, Roland Dreier, Linux PPC Embedded mailing list
On Thu, Sep 25, 2003 at 03:20:11PM -0500, Bret Indrelee wrote:
> On Thu, 25 Sep 2003, Matt Porter wrote:
> > On Thu, Sep 25, 2003 at 01:15:15PM -0500, Bret Indrelee wrote:
> > > I've read through the old thread about short DMAs, but still there are
> > > things that aren't clear to me.
> > >
> > > What exactly is the issue?
> >
> > The issue is confined to processors which do not support hardware
> > snooping of the external bus. In this case, management of the
> > caches is performed in software.
>
> It is trying to figure out which systems have these sort of issues
> that I'm currently puzzling through. Where the heck should I expect
> to find this in the databooks for the various products?
Usually in the overview of the processor and bus interface chipsets
they will list support for snooping.
> [ snip ]
> > > Right now, I'm interested in the PPC and x86 compatible (Pentium,
> > > Pentium 4, Geode) systems, trying to understand the differences and
> > > requirements of each.
> >
> > Which PPCs? Classic PPC != PPC8xx != PPC40x != Book E PPC. :)
>
> My immediate concern is 8245 and the Intel/Pentium style processors.
>
> In the near term for PPC, the 8245 and maybe 405E.
All 82xx perform hardware snooping to maintain coherence. 405 has
the issue since it is PPC4xx as I mentioned below.
> > PPC8xx and PPC4xx require software cache coherency. If you want
> > your products to work on PPC44x (I hope you do, they are targetted
> > at markets where qlogic storage controllers are used) ensuring that
> > your DMA buffers are cacheline aligned at the head/tail is
> > important.
>
> It looked like on the PPC if I aligned for 32 byes (0x20), that should
> handle it for now. This is for embedded, not the HBA controller, so I
> shouldn't need to worry about CONFIG_PPC64BRIDGE.
True, except for PPC8xx and PPC403GCX, which have a 16-byte cacheline.
But for the processors you listed, 32 bytes is fine. However,
you can use the platform-dependent define that I referenced
earlier in the thread instead of worrying about a particular
number. Might as well write portable code.
> On Pentium, I have to figure out if they do the snooping or not. I
> suspect that they do, but finding cache coherency problems is bad
> enough that I need a more definitive answer than that.
The processor and chipset manuals should talk about "bus snooping",
which is the tipoff. AFAIK, there is no such thing as a non-coherent
IA32 machine, excluding the possibility of a buggy companion chipset
that doesn't implement its snooping protocol correctly.
Oh, and to throw another wrench in the works: although Classic PPCs
support snooping, some system controllers (e.g. the GT64x60) allow it
to be disabled. So even PPC7xx/74xx could depend on software-managed
cache code in some embedded systems. :)
I guess my point is that it might be easier to use this knowledge
to make the code portable so it runs on all processors, instead
of concentrating on a couple of cases. Aligning the head and tail of
your DMAable buffers to L1_CACHE_BYTES is a generic operation. As
Eugene points out, this is not a concern for consistent memory,
since it will be page-aligned and therefore by definition cacheline
aligned. It's streaming mappings that you need to worry about.
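For a statically declared streaming buffer, that might look something
like this sketch (the payload size is made up):

#include <linux/types.h>
#include <linux/cache.h>        /* L1_CACHE_BYTES */

#define RX_PAYLOAD 96           /* illustrative payload size */

/* Round the size up to whole cache lines and align the start, so the
 * buffer shares no cache line with neighbouring data on any of the
 * parts discussed (16-byte lines on 8xx/403GCX, 32-byte elsewhere). */
static u8 rx_buf[(RX_PAYLOAD + L1_CACHE_BYTES - 1)
                 / L1_CACHE_BYTES * L1_CACHE_BYTES]
        __attribute__((aligned(L1_CACHE_BYTES)));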
-Matt
* Re: Any restrictions on DMA address boundary?
2003-09-25 20:32 ` Bret Indrelee
@ 2003-09-25 20:57 ` Matt Porter
0 siblings, 0 replies; 14+ messages in thread
From: Matt Porter @ 2003-09-25 20:57 UTC (permalink / raw)
To: Bret Indrelee; +Cc: Eugene Surovegin, Linux PPC Embedded mailing list
On Thu, Sep 25, 2003 at 03:32:13PM -0500, Bret Indrelee wrote:
>
> On Thu, 25 Sep 2003, Eugene Surovegin wrote:
> > At 12:56 PM 9/25/2003, Matt Porter wrote:
> > >When a buffer is allocated using the allowed methods (as defined in
> > >DMA-mapping.txt) to obtain memory for use in DMA, there is
> > >no guarantee that the buffer is cacheline aligned.
> >
> > Hmm, I don't think this is true.
> >
> > DMA-mapping.txt explicitly states that pci_alloc_consistent() returns
> > aligned memory buffer:
>
> It has been cut from the conversation, but I'm using streaming DMA
> mappings. Specifically, pci_map_single() and pci_unmap_single().
>
> My reading of the 2.4 DMA-mapping.txt is that use of pci_alloc_consistent()
> is specific to Consistent DMA mappings. Same for the pci_pool_ interface.
This is correct. All of my explanation applied to streaming DMA
mappings.
-Matt
* Re: Any restrictions on DMA address boundary?
2003-09-25 20:55 ` Matt Porter
@ 2003-09-25 21:27 ` Bret Indrelee
0 siblings, 0 replies; 14+ messages in thread
From: Bret Indrelee @ 2003-09-25 21:27 UTC (permalink / raw)
To: Matt Porter, Roland Dreier; +Cc: Linux PPC Embedded mailing list
Just wanted to thank both Matt Porter and Roland Dreier for taking
the time to explain the situation.
I've aligned our buffers to 32 bytes on PPC and am in the process of
confirming the x86 snoop/coherency situation. Aligning it this way
makes the transfer slightly faster anyway, since the 8245 PCI
bridge will cause a disconnect at every 32-byte boundary on a
PCI burst read.
Since the work I'm doing is embedded and we use the same buffer
format between application and driver/kernel space, I'm using
a fixed number rather than L1_CACHE_BYTES. The define for L1_CACHE_BYTES
is only available (as it probably should be) if __KERNEL__ is
defined.
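A sketch of the kind of shared header being described (the name and
the build-time check are illustrative; the fixed 32 only covers the
parts discussed in this thread):

/* mydrv_shared.h -- seen by both the application and the driver */
#define MYDRV_DMA_ALIGN 32

#ifdef __KERNEL__
#include <linux/cache.h>
/* Catch a port to a CPU with larger cache lines at build time. */
#if MYDRV_DMA_ALIGN < L1_CACHE_BYTES
#error "MYDRV_DMA_ALIGN is smaller than this CPU's cache line size"
#endif
#endif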
-Bret
--
Bret Indrelee QLogic Corporation
Bret.Indrelee@qlogic.com 6321 Bury Drive, St 13, Eden Prairie, MN 55346