* [PATCH 0/4] add a helper function, is_buffer_dma_capable()
@ 2008-09-09 16:06 FUJITA Tomonori
2008-09-09 16:06 ` [PATCH 1/4] add is_buffer_dma_capable helper function FUJITA Tomonori
2008-09-09 16:27 ` [PATCH 0/4] add a helper function, is_buffer_dma_capable() Joerg Roedel
0 siblings, 2 replies; 10+ messages in thread
From: FUJITA Tomonori @ 2008-09-09 16:06 UTC (permalink / raw)
To: mingo; +Cc: linux-kernel, fujita.tomonori, tony.luck, joerg.roedel
This patchset adds a trivial helper function, is_buffer_dma_capable(),
and cleans up some IOMMU implementations. This function checks whether
a memory region is DMA-capable, given the dma_mask (or
coherent_dma_mask) of a device and the address and size of a memory
region. It's useful for IOMMUs that don't do virtual mappings at all
times.
The patch for swiotlb is a bug fix (the rest are just
cleanups). Currently, swiotlb doesn't take the size of a memory region
into account when checking whether the memory region is DMA-capable.
arch/x86/kernel/pci-gart_64.c | 16 +++-------------
arch/x86/kernel/pci-nommu.c | 10 ++++++----
include/linux/dma-mapping.h | 5 +++++
lib/swiotlb.c | 15 ++++++++-------
4 files changed, 22 insertions(+), 24 deletions(-)
^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH 1/4] add is_buffer_dma_capable helper function
2008-09-09 16:06 [PATCH 0/4] add a helper function, is_buffer_dma_capable() FUJITA Tomonori
@ 2008-09-09 16:06 ` FUJITA Tomonori
2008-09-09 16:06 ` [PATCH 2/4] x86: convert gart to use " FUJITA Tomonori
2008-09-09 16:17 ` [PATCH 1/4] add " Luck, Tony
2008-09-09 16:27 ` [PATCH 0/4] add a helper function, is_buffer_dma_capable() Joerg Roedel
1 sibling, 2 replies; 10+ messages in thread
From: FUJITA Tomonori @ 2008-09-09 16:06 UTC (permalink / raw)
To: mingo; +Cc: linux-kernel, fujita.tomonori, Tony Luck, Joerg Roedel
The is_buffer_dma_capable helper function checks whether a memory
region is DMA-capable. The arguments are the dma_mask (or
coherent_dma_mask) of a device and the address and size of a memory
region.
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Joerg Roedel <joerg.roedel@amd.com>
---
include/linux/dma-mapping.h | 5 +++++
1 files changed, 5 insertions(+), 0 deletions(-)
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 952e0f8..6ed50c1 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -48,6 +48,11 @@ static inline int is_device_dma_capable(struct device *dev)
return dev->dma_mask != NULL && *dev->dma_mask != DMA_MASK_NONE;
}
+static inline int is_buffer_dma_capable(u64 mask, dma_addr_t addr, size_t size)
+{
+ return addr + size <= mask;
+}
+
#ifdef CONFIG_HAS_DMA
#include <asm/dma-mapping.h>
#else
--
1.5.4.2
* [PATCH 2/4] x86: convert gart to use is_buffer_dma_capable helper function
2008-09-09 16:06 ` [PATCH 1/4] add is_buffer_dma_capable helper function FUJITA Tomonori
@ 2008-09-09 16:06 ` FUJITA Tomonori
2008-09-09 16:06 ` [PATCH 3/4] x86: convert pci-nommu " FUJITA Tomonori
2008-09-09 16:17 ` [PATCH 1/4] add " Luck, Tony
1 sibling, 1 reply; 10+ messages in thread
From: FUJITA Tomonori @ 2008-09-09 16:06 UTC (permalink / raw)
To: mingo; +Cc: linux-kernel, fujita.tomonori, Joerg Roedel
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Joerg Roedel <joerg.roedel@amd.com>
---
arch/x86/kernel/pci-gart_64.c | 16 +++-------------
1 files changed, 3 insertions(+), 13 deletions(-)
diff --git a/arch/x86/kernel/pci-gart_64.c b/arch/x86/kernel/pci-gart_64.c
index 0b99d4a..1b0c412 100644
--- a/arch/x86/kernel/pci-gart_64.c
+++ b/arch/x86/kernel/pci-gart_64.c
@@ -214,24 +214,14 @@ static void iommu_full(struct device *dev, size_t size, int dir)
static inline int
need_iommu(struct device *dev, unsigned long addr, size_t size)
{
- u64 mask = *dev->dma_mask;
- int high = addr + size > mask;
- int mmu = high;
-
- if (force_iommu)
- mmu = 1;
-
- return mmu;
+ return force_iommu ||
+ !is_buffer_dma_capable(*dev->dma_mask, addr, size);
}
static inline int
nonforced_iommu(struct device *dev, unsigned long addr, size_t size)
{
- u64 mask = *dev->dma_mask;
- int high = addr + size > mask;
- int mmu = high;
-
- return mmu;
+ return !is_buffer_dma_capable(*dev->dma_mask, addr, size);
}
/* Map a single continuous physical area into the IOMMU.
--
1.5.4.2
* [PATCH 3/4] x86: convert pci-nommu to use is_buffer_dma_capable helper function
2008-09-09 16:06 ` [PATCH 2/4] x86: convert gart to use " FUJITA Tomonori
@ 2008-09-09 16:06 ` FUJITA Tomonori
2008-09-09 16:06 ` [PATCH 4/4] swiotlb: convert swiotlb " FUJITA Tomonori
0 siblings, 1 reply; 10+ messages in thread
From: FUJITA Tomonori @ 2008-09-09 16:06 UTC (permalink / raw)
To: mingo; +Cc: linux-kernel, fujita.tomonori
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
---
arch/x86/kernel/pci-nommu.c | 10 ++++++----
1 files changed, 6 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kernel/pci-nommu.c b/arch/x86/kernel/pci-nommu.c
index 8e398b5..1c1c98a 100644
--- a/arch/x86/kernel/pci-nommu.c
+++ b/arch/x86/kernel/pci-nommu.c
@@ -14,7 +14,7 @@
static int
check_addr(char *name, struct device *hwdev, dma_addr_t bus, size_t size)
{
- if (hwdev && bus + size > *hwdev->dma_mask) {
+ if (hwdev && !is_buffer_dma_capable(*hwdev->dma_mask, bus, size)) {
if (*hwdev->dma_mask >= DMA_32BIT_MASK)
printk(KERN_ERR
"nommu_%s: overflow %Lx+%zu of device mask %Lx\n",
@@ -79,6 +79,7 @@ nommu_alloc_coherent(struct device *hwdev, size_t size,
unsigned long dma_mask;
int node;
struct page *page;
+ dma_addr_t addr;
dma_mask = dma_alloc_coherent_mask(hwdev, gfp);
@@ -90,14 +91,15 @@ again:
if (!page)
return NULL;
- if ((page_to_phys(page) + size > dma_mask) && !(gfp & GFP_DMA)) {
+ addr = page_to_phys(page);
+ if (!is_buffer_dma_capable(dma_mask, addr, size) && !(gfp & GFP_DMA)) {
free_pages((unsigned long)page_address(page), get_order(size));
gfp |= GFP_DMA;
goto again;
}
- *dma_addr = page_to_phys(page);
- if (check_addr("alloc_coherent", hwdev, *dma_addr, size)) {
+ if (check_addr("alloc_coherent", hwdev, addr, size)) {
+ *dma_addr = addr;
flush_write_buffers();
return page_address(page);
}
--
1.5.4.2
* [PATCH 4/4] swiotlb: convert swiotlb to use is_buffer_dma_capable helper function
2008-09-09 16:06 ` [PATCH 3/4] x86: convert pci-nommu " FUJITA Tomonori
@ 2008-09-09 16:06 ` FUJITA Tomonori
0 siblings, 0 replies; 10+ messages in thread
From: FUJITA Tomonori @ 2008-09-09 16:06 UTC (permalink / raw)
To: mingo; +Cc: linux-kernel, fujita.tomonori, Tony Luck
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Tony Luck <tony.luck@intel.com>
---
lib/swiotlb.c | 15 ++++++++-------
1 files changed, 8 insertions(+), 7 deletions(-)
diff --git a/lib/swiotlb.c b/lib/swiotlb.c
index b5f5d11..240a67c 100644
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -274,13 +274,13 @@ cleanup1:
}
static int
-address_needs_mapping(struct device *hwdev, dma_addr_t addr)
+address_needs_mapping(struct device *hwdev, dma_addr_t addr, size_t size)
{
dma_addr_t mask = 0xffffffff;
/* If the device has a mask, use it, otherwise default to 32 bits */
if (hwdev && hwdev->dma_mask)
mask = *hwdev->dma_mask;
- return (addr & ~mask) != 0;
+ return !is_buffer_dma_capable(mask, addr, size);
}
static int is_swiotlb_buffer(char *addr)
@@ -473,7 +473,7 @@ swiotlb_alloc_coherent(struct device *hwdev, size_t size,
int order = get_order(size);
ret = (void *)__get_free_pages(flags, order);
- if (ret && address_needs_mapping(hwdev, virt_to_bus(ret))) {
+ if (ret && address_needs_mapping(hwdev, virt_to_bus(ret), size)) {
/*
* The allocated memory isn't reachable by the device.
* Fall back on swiotlb_map_single().
@@ -497,7 +497,7 @@ swiotlb_alloc_coherent(struct device *hwdev, size_t size,
dev_addr = virt_to_bus(ret);
/* Confirm address can be DMA'd by device */
- if (address_needs_mapping(hwdev, dev_addr)) {
+ if (address_needs_mapping(hwdev, dev_addr, size)) {
printk("hwdev DMA mask = 0x%016Lx, dev_addr = 0x%016Lx\n",
(unsigned long long)*hwdev->dma_mask,
(unsigned long long)dev_addr);
@@ -561,7 +561,7 @@ swiotlb_map_single_attrs(struct device *hwdev, void *ptr, size_t size,
* we can safely return the device addr and not worry about bounce
* buffering it.
*/
- if (!address_needs_mapping(hwdev, dev_addr) && !swiotlb_force)
+ if (!address_needs_mapping(hwdev, dev_addr, size) && !swiotlb_force)
return dev_addr;
/*
@@ -578,7 +578,7 @@ swiotlb_map_single_attrs(struct device *hwdev, void *ptr, size_t size,
/*
* Ensure that the address returned is DMA'ble
*/
- if (address_needs_mapping(hwdev, dev_addr))
+ if (address_needs_mapping(hwdev, dev_addr, size))
panic("map_single: bounce buffer is not DMA'ble");
return dev_addr;
@@ -721,7 +721,8 @@ swiotlb_map_sg_attrs(struct device *hwdev, struct scatterlist *sgl, int nelems,
for_each_sg(sgl, sg, nelems, i) {
addr = SG_ENT_VIRT_ADDRESS(sg);
dev_addr = virt_to_bus(addr);
- if (swiotlb_force || address_needs_mapping(hwdev, dev_addr)) {
+ if (swiotlb_force ||
+ address_needs_mapping(hwdev, dev_addr, sg->length)) {
void *map = map_single(hwdev, addr, sg->length, dir);
if (!map) {
/* Don't panic here, we expect map_sg users
--
1.5.4.2
* RE: [PATCH 1/4] add is_buffer_dma_capable helper function
2008-09-09 16:06 ` [PATCH 1/4] add is_buffer_dma_capable helper function FUJITA Tomonori
2008-09-09 16:06 ` [PATCH 2/4] x86: convert gart to use " FUJITA Tomonori
@ 2008-09-09 16:17 ` Luck, Tony
2008-09-09 16:21 ` Joerg Roedel
2008-09-09 16:49 ` FUJITA Tomonori
1 sibling, 2 replies; 10+ messages in thread
From: Luck, Tony @ 2008-09-09 16:17 UTC (permalink / raw)
To: FUJITA Tomonori, mingo@redhat.com
Cc: linux-kernel@vger.kernel.org, Joerg Roedel
+static inline int is_buffer_dma_capable(u64 mask, dma_addr_t addr, size_t size)
+{
+ return addr + size <= mask;
+}
Do we care about wrap-around (e.g. addr=0xffffffffffffffff size=2)?
-Tony
* Re: [PATCH 1/4] add is_buffer_dma_capable helper function
2008-09-09 16:17 ` [PATCH 1/4] add " Luck, Tony
@ 2008-09-09 16:21 ` Joerg Roedel
2008-09-09 16:49 ` FUJITA Tomonori
1 sibling, 0 replies; 10+ messages in thread
From: Joerg Roedel @ 2008-09-09 16:21 UTC (permalink / raw)
To: Luck, Tony
Cc: FUJITA Tomonori, mingo@redhat.com, linux-kernel@vger.kernel.org
On Tue, Sep 09, 2008 at 09:17:05AM -0700, Luck, Tony wrote:
> +static inline int is_buffer_dma_capable(u64 mask, dma_addr_t addr, size_t size)
> +{
> + return addr + size <= mask;
> +}
>
> Do we care about wrap-around (e.g. addr=0xffffffffffffffff size=2)?
I don't think so. The current code does not care about it, and if it
happens it's a bug in the IOMMU implementation.
Joerg
--
| AMD Saxony Limited Liability Company & Co. KG
Operating | Wilschdorfer Landstr. 101, 01109 Dresden, Germany
System | Register Court Dresden: HRA 4896
Research | General Partner authorized to represent:
Center | AMD Saxony LLC (Wilmington, Delaware, US)
| General Manager of AMD Saxony LLC: Dr. Hans-R. Deppe, Thomas McCoy
* Re: [PATCH 0/4] add a helper function, is_buffer_dma_capable()
2008-09-09 16:06 [PATCH 0/4] add a helper function, is_buffer_dma_capable() FUJITA Tomonori
2008-09-09 16:06 ` [PATCH 1/4] add is_buffer_dma_capable helper function FUJITA Tomonori
@ 2008-09-09 16:27 ` Joerg Roedel
2008-09-10 9:34 ` Ingo Molnar
1 sibling, 1 reply; 10+ messages in thread
From: Joerg Roedel @ 2008-09-09 16:27 UTC (permalink / raw)
To: FUJITA Tomonori; +Cc: mingo, linux-kernel, tony.luck
On Wed, Sep 10, 2008 at 01:06:45AM +0900, FUJITA Tomonori wrote:
> This patchset adds a trivial helper function, is_buffer_dma_capable(),
> and cleans up some IOMMU implementations. This function checks whether
> a memory region is DMA-capable, given the dma_mask (or
> coherent_dma_mask) of a device and the address and size of a memory
> region. It's useful for IOMMUs that don't do virtual mappings at all
> times.
>
> The patch for swiotlb is a bug fix (the rest are just
> cleanups). Currently, swiotlb doesn't take the size of a memory region
> into account when checking whether the memory region is DMA-capable.
Good idea. This patchset increases the readability of the software
dma-api implementations.
Acked-by: Joerg Roedel <joerg.roedel@amd.com>
--
| AMD Saxony Limited Liability Company & Co. KG
Operating | Wilschdorfer Landstr. 101, 01109 Dresden, Germany
System | Register Court Dresden: HRA 4896
Research | General Partner authorized to represent:
Center | AMD Saxony LLC (Wilmington, Delaware, US)
| General Manager of AMD Saxony LLC: Dr. Hans-R. Deppe, Thomas McCoy
* RE: [PATCH 1/4] add is_buffer_dma_capable helper function
2008-09-09 16:17 ` [PATCH 1/4] add " Luck, Tony
2008-09-09 16:21 ` Joerg Roedel
@ 2008-09-09 16:49 ` FUJITA Tomonori
1 sibling, 0 replies; 10+ messages in thread
From: FUJITA Tomonori @ 2008-09-09 16:49 UTC (permalink / raw)
To: tony.luck; +Cc: fujita.tomonori, mingo, linux-kernel, joerg.roedel
On Tue, 9 Sep 2008 09:17:05 -0700
"Luck, Tony" <tony.luck@intel.com> wrote:
> +static inline int is_buffer_dma_capable(u64 mask, dma_addr_t addr, size_t size)
> +{
> + return addr + size <= mask;
> +}
>
> Do we care about wrap-around (e.g. addr=0xffffffffffffffff size=2)?
We could do something like:
return addr < mask && addr + size <= mask;
But such a wrap-around can happen only with a buggy IOMMU
implementation. A driver would be doing DMA from 0xffffffffffffffff to
0x1. Even if this function did the right thing in such a case, we would
be dead anyway.
* Re: [PATCH 0/4] add a helper function, is_buffer_dma_capable()
2008-09-09 16:27 ` [PATCH 0/4] add a helper function, is_buffer_dma_capable() Joerg Roedel
@ 2008-09-10 9:34 ` Ingo Molnar
0 siblings, 0 replies; 10+ messages in thread
From: Ingo Molnar @ 2008-09-10 9:34 UTC (permalink / raw)
To: Joerg Roedel; +Cc: FUJITA Tomonori, mingo, linux-kernel, tony.luck
* Joerg Roedel <joerg.roedel@amd.com> wrote:
> On Wed, Sep 10, 2008 at 01:06:45AM +0900, FUJITA Tomonori wrote:
> > This patchset adds a trivial helper function, is_buffer_dma_capable(),
> > and cleans up some IOMMU implementations. This function checks whether
> > a memory region is DMA-capable, given the dma_mask (or
> > coherent_dma_mask) of a device and the address and size of a memory
> > region. It's useful for IOMMUs that don't do virtual mappings at all
> > times.
> >
> > The patch for swiotlb is a bug fix (the rest are just
> > cleanups). Currently, swiotlb doesn't take the size of a memory region
> > into account when checking whether the memory region is DMA-capable.
>
> Good idea. This patchset increases the readability of the software
> dma-api implementations.
>
> Acked-by: Joerg Roedel <joerg.roedel@amd.com>
applied to tip/x86/iommu:
2797982: swiotlb: convert swiotlb to use is_buffer_dma_capable helper function
49fbf4e: x86: convert pci-nommu to use is_buffer_dma_capable helper function
ac4ff65: x86: convert gart to use is_buffer_dma_capable helper function
636dc67: add is_buffer_dma_capable helper function
thanks!
Ingo
end of thread, other threads:[~2008-09-10 9:35 UTC | newest]
Thread overview: 10+ messages
2008-09-09 16:06 [PATCH 0/4] add a helper function, is_buffer_dma_capable() FUJITA Tomonori
2008-09-09 16:06 ` [PATCH 1/4] add is_buffer_dma_capable helper function FUJITA Tomonori
2008-09-09 16:06 ` [PATCH 2/4] x86: convert gart to use " FUJITA Tomonori
2008-09-09 16:06 ` [PATCH 3/4] x86: convert pci-nommu " FUJITA Tomonori
2008-09-09 16:06 ` [PATCH 4/4] swiotlb: convert swiotlb " FUJITA Tomonori
2008-09-09 16:17 ` [PATCH 1/4] add " Luck, Tony
2008-09-09 16:21 ` Joerg Roedel
2008-09-09 16:49 ` FUJITA Tomonori
2008-09-09 16:27 ` [PATCH 0/4] add a helper function, is_buffer_dma_capable() Joerg Roedel
2008-09-10 9:34 ` Ingo Molnar