* Re: pci_map_single return value
[not found] ` <20040102133454.22daa451.ak@suse.de>
@ 2004-03-19 0:14 ` Anton Blanchard
2004-03-19 0:41 ` David S. Miller
` (2 more replies)
0 siblings, 3 replies; 10+ messages in thread
From: Anton Blanchard @ 2004-03-19 0:14 UTC (permalink / raw)
To: Andi Kleen; +Cc: davem, linux-arch
> > @@ -210,7 +222,10 @@
> > Returns: the number of physical segments mapped (this may be shorter
> > than <nents> passed in if the block layer determines that some
> > elements of the scatter/gather list are physically adjacent and thus
> > -may be mapped with a single entry).
> > +may be mapped with a single entry).
>
> Another addition here:
>
> Please note that the sg cannot be mapped again if it has been mapped once.
> The mapping process is allowed to destroy information in the sg.
>
> [some drivers do this currently and it causes problems]
>
> > +As with the other mapping interfaces, dma_map_sg can fail. When it
> > +does, 0 is returned and a driver should take appropriate action.
>
> Please make this stronger. A block driver should definitely abort the
> request, otherwise you can kiss your super block good-bye.
Better late than never. Andrew does this look OK to you?
Anton
--
Introduce dma_error() and pci_dma_error(), which are used to detect
failures in pci_map_single.
--
===== Documentation/DMA-API.txt 1.5 vs edited =====
--- 1.5/Documentation/DMA-API.txt Thu Feb 5 01:01:19 2004
+++ edited/Documentation/DMA-API.txt Fri Mar 19 11:09:16 2004
@@ -279,6 +279,18 @@
cache width is.
int
+dma_error(dma_addr_t dma_addr)
+
+int
+pci_dma_error(dma_addr_t dma_addr)
+
+In some circumstances dma_map_single and dma_map_page will fail to create
+a mapping. A driver can check for these errors by testing the returned
+DMA address with dma_error(). A non-zero return value means the mapping
+could not be created and the driver should take appropriate action (e.g.
+reduce current DMA mapping usage, or delay and try again later).
+
+int
dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
enum dma_data_direction direction)
int
@@ -290,7 +302,16 @@
Returns: the number of physical segments mapped (this may be shorter
than <nents> passed in if the block layer determines that some
elements of the scatter/gather list are physically adjacent and thus
-may be mapped with a single entry).
+may be mapped with a single entry).
+
+Please note that the sg cannot be mapped again if it has been mapped once.
+The mapping process is allowed to destroy information in the sg.
+
+As with the other mapping interfaces, dma_map_sg can fail. When it
+does, 0 is returned and a driver must take appropriate action. It is
+critical that the driver do something; for a block driver, aborting
+the request or even oopsing is better than doing nothing and
+corrupting the filesystem.
void
dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nhwentries,
===== Documentation/DMA-mapping.txt 1.18 vs edited =====
--- 1.18/Documentation/DMA-mapping.txt Sun Mar 14 17:54:58 2004
+++ edited/Documentation/DMA-mapping.txt Fri Mar 19 10:55:33 2004
@@ -519,7 +519,7 @@
ends and the second one starts on a page boundary - in fact this is a huge
advantage for cards which either cannot do scatter-gather or have very
limited number of scatter-gather entries) and returns the actual number
-of sg entries it mapped them to.
+of sg entries it mapped them to. On failure 0 is returned.
Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
@@ -841,6 +841,27 @@
deleted.
2) More to come...
+
+ Handling Errors
+
+DMA address space is limited on some architectures and an allocation
+failure can be determined by:
+
+- checking if pci_alloc_consistent returns NULL or pci_map_sg returns 0
+
+- checking the returned dma_addr_t of pci_map_single and pci_map_page
+ by using pci_dma_error():
+
+ dma_addr_t dma_handle;
+
+ dma_handle = pci_map_single(dev, addr, size, direction);
+ if (pci_dma_error(dma_handle)) {
+ /*
+ * reduce current DMA mapping usage,
+ * delay and try again later or
+ * reset driver.
+ */
+ }
Closing
===== include/asm-generic/dma-mapping.h 1.5 vs edited =====
--- 1.5/include/asm-generic/dma-mapping.h Sun Mar 14 17:54:58 2004
+++ edited/include/asm-generic/dma-mapping.h Fri Mar 19 10:55:33 2004
@@ -140,6 +140,12 @@
pci_dma_sync_sg_for_device(to_pci_dev(dev), sg, nelems, (int)direction);
}
+static inline int
+dma_error(dma_addr_t dma_addr)
+{
+ return pci_dma_error(dma_addr);
+}
+
/* Now for the API extensions over the pci_ one */
#define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f)
===== include/asm-generic/pci-dma-compat.h 1.4 vs edited =====
--- 1.4/include/asm-generic/pci-dma-compat.h Sun Mar 14 17:54:58 2004
+++ edited/include/asm-generic/pci-dma-compat.h Fri Mar 19 10:55:33 2004
@@ -98,4 +98,10 @@
dma_sync_sg_for_device(hwdev == NULL ? NULL : &hwdev->dev, sg, nelems, (enum dma_data_direction)direction);
}
+static inline int
+pci_dma_error(dma_addr_t dma_addr)
+{
+ return dma_error(dma_addr);
+}
+
#endif
===== include/asm-i386/dma-mapping.h 1.3 vs edited =====
--- 1.3/include/asm-i386/dma-mapping.h Sun Mar 14 17:54:58 2004
+++ edited/include/asm-i386/dma-mapping.h Fri Mar 19 10:55:33 2004
@@ -109,6 +109,12 @@
{
flush_write_buffers();
}
+
+static inline int
+dma_error(dma_addr_t dma_addr)
+{
+ return 0;
+}
static inline int
dma_supported(struct device *dev, u64 mask)
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: pci_map_single return value
2004-03-19 0:14 ` pci_map_single return value Anton Blanchard
@ 2004-03-19 0:41 ` David S. Miller
2004-03-19 0:55 ` Andrew Morton
2004-03-19 16:57 ` James Bottomley
2 siblings, 0 replies; 10+ messages in thread
From: David S. Miller @ 2004-03-19 0:41 UTC (permalink / raw)
To: Anton Blanchard; +Cc: ak, linux-arch
On Fri, 19 Mar 2004 11:14:48 +1100
Anton Blanchard <anton@samba.org> wrote:
> Better late than never. Andrew does this look OK to you?
...
> Introduce dma_error() and pci_dma_error() which is used to detect
> failures in pci_map_single.
This looks fine to me.
We really need to backport the dma sync stuff and this (once integrated
to 2.6.x) into 2.4.x, probably when 2.4.27-pre1 opens up.
* Re: pci_map_single return value
2004-03-19 0:14 ` pci_map_single return value Anton Blanchard
2004-03-19 0:41 ` David S. Miller
@ 2004-03-19 0:55 ` Andrew Morton
2004-03-19 1:37 ` Anton Blanchard
2004-03-19 16:57 ` James Bottomley
2 siblings, 1 reply; 10+ messages in thread
From: Andrew Morton @ 2004-03-19 0:55 UTC (permalink / raw)
To: Anton Blanchard; +Cc: ak, davem, linux-arch
Anton Blanchard <anton@samba.org> wrote:
>
> > Another addition here:
> >
> > Please note that the sg cannot be mapped again if it has been mapped once.
> > The mapping process is allowed to destroy information in the sg.
> >
> > [some drivers do this currently and it causes problems]
> >
> > > +As with the other mapping interfaces, dma_map_sg can fail. When it
> > > +does, 0 is returned and a driver should take appropriate action.
> >
> > Please make this stronger. A block driver should definitely abort the
> > request, otherwise you can kiss your super block good-bye.
>
> Better late than never. Andrew does this look OK to you?
Spose so. Does this mean driver patches are on the way?
* Re: pci_map_single return value
2004-03-19 0:55 ` Andrew Morton
@ 2004-03-19 1:37 ` Anton Blanchard
2004-03-19 14:16 ` James Bottomley
0 siblings, 1 reply; 10+ messages in thread
From: Anton Blanchard @ 2004-03-19 1:37 UTC (permalink / raw)
To: Andrew Morton; +Cc: ak, davem, linux-arch
> Spose so. Does this mean driver patches are on the way?
That's the plan. The IPR, ibmvscsi and Emulex drivers already have hacks
in them to do this in an arch-specific way. None of those drivers are
merged yet, although IPR and ibmvscsi are close.
Anton
* Re: pci_map_single return value
2004-03-19 1:37 ` Anton Blanchard
@ 2004-03-19 14:16 ` James Bottomley
0 siblings, 0 replies; 10+ messages in thread
From: James Bottomley @ 2004-03-19 14:16 UTC (permalink / raw)
To: Anton Blanchard; +Cc: Andrew Morton, ak, davem, linux-arch
On Thu, 2004-03-18 at 20:37, Anton Blanchard wrote:
> That's the plan. The IPR, ibmvscsi and Emulex drivers already have hacks
> in them to do this in an arch-specific way. None of those drivers are
> merged yet, although IPR and ibmvscsi are close.
I already warned the ibmvscsi person about this. I can hold it up until
it does things properly.
James
* Re: pci_map_single return value
2004-03-19 0:14 ` pci_map_single return value Anton Blanchard
2004-03-19 0:41 ` David S. Miller
2004-03-19 0:55 ` Andrew Morton
@ 2004-03-19 16:57 ` James Bottomley
2004-03-19 18:55 ` David S. Miller
2 siblings, 1 reply; 10+ messages in thread
From: James Bottomley @ 2004-03-19 16:57 UTC (permalink / raw)
To: Anton Blanchard; +Cc: Andi Kleen, davem, linux-arch
On Thu, 2004-03-18 at 19:14, Anton Blanchard wrote:
> Better late than never. Andrew does this look OK to you?
Actually, there is a fundamental problem with this. Today drivers
assume the system will panic if it runs out of mapping resources, so
very few of them actually check return values.
Unless someone is stepping up to fix *all* the drivers in the kernel,
then we need to cope with this for a while yet.
The way I'd thought of doing this was allowing drivers to advertise
compliance with the error handling API (probably via a flag in struct
device). Those that say they handle errors, we return the error code.
Unmodified drivers wouldn't have the flag, and thus we could still panic
for them before the system gets corrupted...
James
* Re: pci_map_single return value
2004-03-19 16:57 ` James Bottomley
@ 2004-03-19 18:55 ` David S. Miller
2004-03-19 21:01 ` James Bottomley
0 siblings, 1 reply; 10+ messages in thread
From: David S. Miller @ 2004-03-19 18:55 UTC (permalink / raw)
To: James Bottomley; +Cc: anton, ak, linux-arch
On 19 Mar 2004 11:57:10 -0500
James Bottomley <James.Bottomley@SteelEye.com> wrote:
> Actually, there is a fundamental problem with this. Today drivers
> assume the system will panic if it runs out of mapping resources, so
> very few of them actually check return values.
>
> Unless someone is stepping up to fix *all* the drivers in the kernel,
> then we need to cope with this for a while yet.
I agree with your concerns, but I don't think I'd go that far.
This error return issue has been around far too long, years in fact.
We should make it easier to get this functionality into the tree, not
harder.
If there are extra knobs and stuff people need to do in order to do what
effectively amounts to checking for kmalloc() returning NULL, they
are going to be less inclined to do the work.
* Re: pci_map_single return value
2004-03-19 18:55 ` David S. Miller
@ 2004-03-19 21:01 ` James Bottomley
2004-03-19 21:05 ` Andi Kleen
0 siblings, 1 reply; 10+ messages in thread
From: James Bottomley @ 2004-03-19 21:01 UTC (permalink / raw)
To: David S. Miller; +Cc: anton, ak, linux-arch
On Fri, 2004-03-19 at 13:55, David S. Miller wrote:
> I agree with your concerns, but I don't think I'd go that far.
>
> This error return issue has been around far too long, years in fact.
> We should make it easier to get this functionality into the tree, not
> harder.
>
> If there are extra knobs and stuff people need to do in order to what
> effectively amounts to checking for kmalloc() returning NULL, they
> are going to be less inclined to do the work.
I know. My concern stems more from the fact that we have new
architectures that have far fewer IOMMU resources. The primary problem
is x86-64 with only about 256MB of mapping space. However on that arch,
the IOMMU is disabled by default, so it isn't really a problem I
suppose.
James
* Re: pci_map_single return value
2004-03-19 21:01 ` James Bottomley
@ 2004-03-19 21:05 ` Andi Kleen
2004-03-20 0:13 ` James Bottomley
0 siblings, 1 reply; 10+ messages in thread
From: Andi Kleen @ 2004-03-19 21:05 UTC (permalink / raw)
To: James Bottomley; +Cc: davem, anton, linux-arch
On 19 Mar 2004 16:01:29 -0500
James Bottomley <James.Bottomley@steeleye.com> wrote:
> On Fri, 2004-03-19 at 13:55, David S. Miller wrote:
> > I agree with your concerns, but I don't think I'd go that far.
> >
> > This error return issue has been around far too long, years in fact.
> > We should make it easier to get this functionality into the tree, not
> > harder.
> >
> > If there are extra knobs and stuff people need to do in order to what
> > effectively amounts to checking for kmalloc() returning NULL, they
> > are going to be less inclined to do the work.
>
> I know. My concern stems more from the fact that we have new
> architectures that have far fewer IOMMU resources. The primary problem
> is x86-64 with only about 256MB of mapping space. However on that arch,
Actually 64MB on many systems (a 128MB AGP aperture, of which 64MB is
used for AGP).
There are kernel and BIOS options to increase it, but they are often not
set. The kernel options cost memory and cannot be enabled by default.
> the IOMMU is disabled by default, so it isn't really a problem I
> suppose.
It is effectively disabled when you have ~3GB of memory, but when you
have more and a 32-bit-only device, it will be enabled for that device.
-Andi
* Re: pci_map_single return value
2004-03-19 21:05 ` Andi Kleen
@ 2004-03-20 0:13 ` James Bottomley
0 siblings, 0 replies; 10+ messages in thread
From: James Bottomley @ 2004-03-20 0:13 UTC (permalink / raw)
To: Andi Kleen; +Cc: davem, anton, linux-arch
On Fri, 2004-03-19 at 16:05, Andi Kleen wrote:
> It is effectively disabled when you have ~3GB of memory, but when you have more
> and a 32bit only device it will be enabled for that.
Well, I suppose it's up to you to object if this error return switch
will cause an unacceptable risk of data corruption. I'm with davem:
both the architectures I care about, voyager (no IOMMU) and parisc (huge
IOMMU space) aren't going to have a problem.
James
end of thread, other threads:[~2004-03-20 0:15 UTC | newest]
Thread overview: 10+ messages
-- links below jump to the message on this page --
[not found] <20031212155752.GD17683@krispykreme>
[not found] ` <20031212175104.07dc8444.ak@suse.de>
[not found] ` <20031212192131.GF17683@krispykreme>
[not found] ` <20031213220444.4c526afa.davem@redhat.com>
[not found] ` <20040102120121.GT28023@krispykreme>
[not found] ` <20040102133454.22daa451.ak@suse.de>
2004-03-19 0:14 ` pci_map_single return value Anton Blanchard
2004-03-19 0:41 ` David S. Miller
2004-03-19 0:55 ` Andrew Morton
2004-03-19 1:37 ` Anton Blanchard
2004-03-19 14:16 ` James Bottomley
2004-03-19 16:57 ` James Bottomley
2004-03-19 18:55 ` David S. Miller
2004-03-19 21:01 ` James Bottomley
2004-03-19 21:05 ` Andi Kleen
2004-03-20 0:13 ` James Bottomley