* RE: [patch 2.6.13] swiotlb: add swiotlb_sync_single_range_for_{cpu,device}
@ 2005-08-30 18:03 Luck, Tony
2005-08-30 18:09 ` John W. Linville
0 siblings, 1 reply; 20+ messages in thread
From: Luck, Tony @ 2005-08-30 18:03 UTC (permalink / raw)
To: John W. Linville, linux-kernel; +Cc: Andi Kleen, discuss, linux-ia64
>+swiotlb_sync_single_range_for_cpu(struct device *hwdev,
>+swiotlb_sync_single_range_for_device(struct device *hwdev,
Huh? These look identical ... same args, same code, just a
different name.
-Tony
* Re: [patch 2.6.13] swiotlb: add swiotlb_sync_single_range_for_{cpu,device}
  2005-08-30 18:09 ` John W. Linville
From: John W. Linville @ 2005-08-30 18:09 UTC (permalink / raw)
To: Luck, Tony
Cc: linux-kernel, Andi Kleen, discuss, linux-ia64, Asit.K.Mallick,
	goutham.rao, davidm

On Tue, Aug 30, 2005 at 11:03:35AM -0700, Luck, Tony wrote:
> 
> >+swiotlb_sync_single_range_for_cpu(struct device *hwdev,
> >+swiotlb_sync_single_range_for_device(struct device *hwdev,
> 
> Huh?  These look identical ... same args, same code, just a
> different name.

Have you looked at the implementations of swiotlb_sync_single_for_cpu
and swiotlb_sync_single_for_device?  Those are already identical; I'm
just following the existing style/practice in that file.  I could do
an additional patch to remove the duplication in those functions if
you'd like.

Who is responsible for the swiotlb code?

John
-- 
John W. Linville	linville@tuxdriver.com
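The consolidation Linville goes on to propose boils down to one pattern: a single shared static helper that takes an extra `offset` argument, with the public `_for_cpu`/`_for_device` entry points as thin wrappers that pass `offset = 0`. A minimal user-space sketch of that pattern, with stand-in names (`pool`, `tlb_start`, `sync_count`) invented purely for illustration:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for the swiotlb bounce-pool bounds. */
static char pool[4096];
static char *tlb_start = pool, *tlb_end = pool + sizeof(pool);
static int sync_count;		/* counts how often the "sync" path runs */

/* One shared helper does the range check; passing offset = 0 recovers
 * the whole-buffer behaviour, mirroring swiotlb_sync_single_range(). */
static void sync_single_range(char *addr, unsigned long offset, size_t size)
{
	char *p = addr + offset;

	(void)size;
	if (p >= tlb_start && p < tlb_end)
		sync_count++;	/* stands in for sync_single()/mark_clean() */
}

void sync_single_for_cpu(char *addr, size_t size)
{
	sync_single_range(addr, 0, size);
}

void sync_single_for_device(char *addr, size_t size)
{
	sync_single_range(addr, 0, size);
}
```

Both directions share one body, so Tony's "same args, same code, just a different name" observation is resolved without changing the exported API.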
* [rfc patch] swiotlb: consolidate swiotlb_sync_single_* implementations
  2005-08-30 18:33 ` John W. Linville
From: John W. Linville @ 2005-08-30 18:33 UTC (permalink / raw)
To: linux-kernel; +Cc: Andi Kleen, discuss, tony.luck, linux-ia64, Asit.K.Mallick

On Tue, Aug 30, 2005 at 02:09:14PM -0400, John W. Linville wrote:
> On Tue, Aug 30, 2005 at 11:03:35AM -0700, Luck, Tony wrote:
> > 
> > >+swiotlb_sync_single_range_for_cpu(struct device *hwdev,
> > >+swiotlb_sync_single_range_for_device(struct device *hwdev,
> > 
> > Huh?  These look identical ... same args, same code, just a
> > different name.
> 
> Have you looked at the implementations for swiotlb_sync_single_for_cpu
> and swiotlb_sync_single_for_device?  Those are already identical.

How about a patch like this?  Just for comment ... I'll repost if
people want it.

John

P.S. This is meant to apply on top of my previous swiotlb patch...
--- linux-8_29_2005/arch/ia64/lib/swiotlb.c.orig	2005-08-30 14:19:32.000000000 -0400
+++ linux-8_29_2005/arch/ia64/lib/swiotlb.c	2005-08-30 14:23:18.000000000 -0400
@@ -493,11 +493,11 @@ swiotlb_unmap_single(struct device *hwde
  * address back to the card, you must first perform a
  * swiotlb_dma_sync_for_device, and then the device again owns the buffer
  */
-void
-swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
-			    size_t size, int dir)
+static inline void
+swiotlb_sync_single_range(struct device *hwdev, dma_addr_t dev_addr,
+			  unsigned long offset, size_t size, int dir)
 {
-	char *dma_addr = phys_to_virt(dev_addr);
+	char *dma_addr = phys_to_virt(dev_addr) + offset;
 
 	if (dir == DMA_NONE)
 		BUG();
@@ -508,17 +508,17 @@ swiotlb_sync_single_for_cpu(struct devic
 }
 
 void
+swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
+			    size_t size, int dir)
+{
+	swiotlb_sync_single_range(hwdev, dev_addr, 0, size, dir);
+}
+
+void
 swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
 			       size_t size, int dir)
 {
-	char *dma_addr = phys_to_virt(dev_addr);
-
-	if (dir == DMA_NONE)
-		BUG();
-	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
-		sync_single(hwdev, dma_addr, size, dir);
-	else if (dir == DMA_FROM_DEVICE)
-		mark_clean(dma_addr, size);
+	swiotlb_sync_single_range(hwdev, dev_addr, 0, size, dir);
 }
 
 /*
@@ -528,28 +528,14 @@
 void
 swiotlb_sync_single_range_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
 				  unsigned long offset, size_t size, int dir)
 {
-	char *dma_addr = phys_to_virt(dev_addr) + offset;
-
-	if (dir == DMA_NONE)
-		BUG();
-	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
-		sync_single(hwdev, dma_addr, size, dir);
-	else if (dir == DMA_FROM_DEVICE)
-		mark_clean(dma_addr, size);
+	swiotlb_sync_single_range(hwdev, dev_addr, offset, size, dir);
 }
 
 void
 swiotlb_sync_single_range_for_device(struct device *hwdev, dma_addr_t dev_addr,
 				     unsigned long offset, size_t size, int dir)
 {
-	char *dma_addr = phys_to_virt(dev_addr) + offset;
-
-	if (dir == DMA_NONE)
-		BUG();
-	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
-		sync_single(hwdev, dma_addr, size, dir);
-	else if (dir == DMA_FROM_DEVICE)
-		mark_clean(dma_addr, size);
+	swiotlb_sync_single_range(hwdev, dev_addr, offset, size, dir);
 }
 
 /*
-- 
John W. Linville	linville@tuxdriver.com
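The payoff of the `offset` parameter is that a sub-range sync only touches `size` bytes starting at `offset` within the mapping, rather than bouncing the whole buffer. A toy user-space model of the `DMA_FROM_DEVICE` copy-back (all names here are hypothetical stand-ins, not kernel API):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy model: `orig_buf` is the driver's buffer, `bounce` the swiotlb copy
 * that the "device" writes into. */
static char orig_buf[64], bounce[64];

/* Models sync_single() for DMA_FROM_DEVICE on a sub-range: copy back only
 * the requested window instead of the whole mapping. */
static void sync_range_for_cpu(unsigned long offset, size_t size)
{
	memcpy(orig_buf + offset, bounce + offset, size);
}
```

A driver polling a small status word inside a large mapped buffer would call this with a small `size`, leaving the rest of the buffer untouched until the full unmap or a whole-buffer sync.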
* [rfc patch] swiotlb: consolidate swiotlb_sync_sg_* implementations
  2005-08-30 18:40 ` John W. Linville
From: John W. Linville @ 2005-08-30 18:40 UTC (permalink / raw)
To: linux-kernel; +Cc: Andi Kleen, discuss, tony.luck, linux-ia64, Asit.K.Mallick

On Tue, Aug 30, 2005 at 02:33:39PM -0400, John W. Linville wrote:
> On Tue, Aug 30, 2005 at 02:09:14PM -0400, John W. Linville wrote:
> > On Tue, Aug 30, 2005 at 11:03:35AM -0700, Luck, Tony wrote:
> > > 
> > > >+swiotlb_sync_single_range_for_cpu(struct device *hwdev,
> > > >+swiotlb_sync_single_range_for_device(struct device *hwdev,
> > > 
> > > Huh?  These look identical ... same args, same code, just a
> > > different name.
> > 
> > Have you looked at the implementations for swiotlb_sync_single_for_cpu
> > and swiotlb_sync_single_for_device?  Those are already identical.
> 
> How about a patch like this?  Just for comment...I'll repost if people
> want it...

Probably should include the swiotlb_sync_sg_* variations too...
Whaddya think?  Again, I'll repost if this is viewed favorably.

John

--- linux-8_29_2005/arch/ia64/lib/swiotlb.c.orig	2005-08-30 14:35:35.000000000 -0400
+++ linux-8_29_2005/arch/ia64/lib/swiotlb.c	2005-08-30 14:37:05.000000000 -0400
@@ -612,9 +612,9 @@ swiotlb_unmap_sg(struct device *hwdev, s
  * The same as swiotlb_sync_single_* but for a scatter-gather list, same rules
  * and usage.
  */
-void
-swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg,
-			int nelems, int dir)
+static inline void
+swiotlb_sync_sg(struct device *hwdev, struct scatterlist *sg,
+		int nelems, int dir)
 {
 	int i;
 
@@ -628,18 +628,17 @@ swiotlb_sync_sg_for_cpu(struct device *h
 }
 
 void
+swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg,
+			int nelems, int dir)
+{
+	swiotlb_sync_sg(hwdev, sg, nelems, dir);
+}
+
+void
 swiotlb_sync_sg_for_device(struct device *hwdev, struct scatterlist *sg,
 			   int nelems, int dir)
 {
-	int i;
-
-	if (dir == DMA_NONE)
-		BUG();
-
-	for (i = 0; i < nelems; i++, sg++)
-		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
-			sync_single(hwdev, (void *) sg->dma_address,
-				    sg->dma_length, dir);
+	swiotlb_sync_sg(hwdev, sg, nelems, dir);
}
 
 int
-- 
John W. Linville	linville@tuxdriver.com
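The scatter-gather variant shares the same structure: the loop only resyncs entries that were actually bounced, which it detects by comparing the entry's DMA address with the buffer's real physical address. A small sketch of that detection logic, using a hypothetical cut-down `sg_entry` in place of `struct scatterlist`:

```c
#include <assert.h>

/* Minimal stand-in for struct scatterlist; fields are hypothetical. */
struct sg_entry {
	unsigned long dma_address;	/* where the device DMAs to/from */
	unsigned long phys_address;	/* the buffer's real physical address */
};

static int synced;

/* Shared body of the sg sync pair: only entries that were bounced
 * (dma_address differs from the buffer's physical address) need work. */
static void sync_sg(struct sg_entry *sg, int nelems)
{
	int i;

	for (i = 0; i < nelems; i++, sg++)
		if (sg->dma_address != sg->phys_address)
			synced++;	/* stands in for sync_single() */
}
```

Entries that mapped directly (no bounce buffer) were never copied, so there is nothing to sync back for them.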
* [patch 2.6.13 0/6] swiotlb maintenance and x86_64 dma_sync_single_range_for_{cpu,device}
  2005-09-12 14:48 ` John W. Linville
From: John W. Linville @ 2005-09-12 14:48 UTC (permalink / raw)
To: linux-kernel, discuss, linux-ia64; +Cc: ak, tony.luck, Asit.K.Mallick

Conduct some maintenance of the swiotlb code:

 -- Move the code from arch/ia64/lib to lib
 -- Clean up some cruft (code duplication)
 -- Add support for syncing sub-ranges of mappings
 -- Add support for syncing DMA_BIDIRECTIONAL mappings
 -- Comment fixup & change record

Also, tack on an x86_64 implementation of dma_sync_single_range_for_cpu
and dma_sync_single_range_for_device.  This makes use of the new swiotlb
sub-range sync support.

Patches to follow...
* [patch 2.6.13 1/6] swiotlb: move from arch/ia64/lib to lib
  2005-09-12 14:48 ` John W. Linville
From: John W. Linville @ 2005-09-12 14:48 UTC (permalink / raw)
To: linux-kernel, discuss, linux-ia64; +Cc: ak, tony.luck, Asit.K.Mallick

The swiotlb implementation is shared by both IA-64 and EM64T.  However,
the source itself lives under arch/ia64.  This patch moves swiotlb.c
from arch/ia64/lib to lib and fixes up the appropriate Makefile and
Kconfig files.  No actual changes are made to swiotlb.c.

Signed-off-by: John W. Linville <linville@tuxdriver.com>
---
 arch/ia64/Kconfig           |    4 
 arch/ia64/lib/Makefile      |    2 
 arch/ia64/lib/swiotlb.c     |  657 --------------------------------------
 arch/x86_64/kernel/Makefile |    2 
 lib/Makefile                |    2 
 lib/swiotlb.c               |  657 ++++++++++++++++++++++++++++++++++++++++
 6 files changed, 664 insertions(+), 660 deletions(-)

--- linux-swiotlb-9_9_2005/lib/swiotlb.c.orig	2005-09-09 16:18:36.000000000 -0400
+++ linux-swiotlb-9_9_2005/lib/swiotlb.c	2005-09-09 16:18:27.000000000 -0400
@@ -0,0 +1,657 @@
+/*
+ * Dynamic DMA mapping support.
+ *
+ * This implementation is for IA-64 platforms that do not support
+ * I/O TLBs (aka DMA address translation hardware).
+ * Copyright (C) 2000 Asit Mallick <Asit.K.Mallick@intel.com>
+ * Copyright (C) 2000 Goutham Rao <goutham.rao@intel.com>
+ * Copyright (C) 2000, 2003 Hewlett-Packard Co
+ *	David Mosberger-Tang <davidm@hpl.hp.com>
+ *
+ * 03/05/07 davidm	Switch from PCI-DMA to generic device DMA API.
+ * 00/12/13 davidm	Rename to swiotlb.c and add mark_clean() to avoid
+ *			unnecessary i-cache flushing.
+ * 04/07/.. ak		Better overflow handling. Assorted fixes.
+ */
+
+#include <linux/cache.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/spinlock.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/ctype.h>
+
+#include <asm/io.h>
+#include <asm/pci.h>
+#include <asm/dma.h>
+
+#include <linux/init.h>
+#include <linux/bootmem.h>
+
+#define OFFSET(val,align) ((unsigned long)	\
+	( (val) & ( (align) - 1)))
+
+#define SG_ENT_VIRT_ADDRESS(sg)	(page_address((sg)->page) + (sg)->offset)
+#define SG_ENT_PHYS_ADDRESS(SG)	virt_to_phys(SG_ENT_VIRT_ADDRESS(SG))
+
+/*
+ * Maximum allowable number of contiguous slabs to map,
+ * must be a power of 2.  What is the appropriate value ?
+ * The complexity of {map,unmap}_single is linearly dependent on this value.
+ */
+#define IO_TLB_SEGSIZE	128
+
+/*
+ * log of the size of each IO TLB slab.  The number of slabs is command line
+ * controllable.
+ */
+#define IO_TLB_SHIFT 11
+
+int swiotlb_force;
+
+/*
+ * Used to do a quick range check in swiotlb_unmap_single and
+ * swiotlb_sync_single_*, to see if the memory was in fact allocated by this
+ * API.
+ */
+static char *io_tlb_start, *io_tlb_end;
+
+/*
+ * The number of IO TLB blocks (in groups of 64) betweeen io_tlb_start and
+ * io_tlb_end.  This is command line adjustable via setup_io_tlb_npages.
+ */
+static unsigned long io_tlb_nslabs;
+
+/*
+ * When the IOMMU overflows we return a fallback buffer. This sets the size.
+ */
+static unsigned long io_tlb_overflow = 32*1024;
+
+void *io_tlb_overflow_buffer;
+
+/*
+ * This is a free list describing the number of free entries available from
+ * each index
+ */
+static unsigned int *io_tlb_list;
+static unsigned int io_tlb_index;
+
+/*
+ * We need to save away the original address corresponding to a mapped entry
+ * for the sync operations.
+ */
+static unsigned char **io_tlb_orig_addr;
+
+/*
+ * Protect the above data structures in the map and unmap calls
+ */
+static DEFINE_SPINLOCK(io_tlb_lock);
+
+static int __init
+setup_io_tlb_npages(char *str)
+{
+	if (isdigit(*str)) {
+		io_tlb_nslabs = simple_strtoul(str, &str, 0);
+		/* avoid tail segment of size < IO_TLB_SEGSIZE */
+		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
+	}
+	if (*str == ',')
+		++str;
+	if (!strcmp(str, "force"))
+		swiotlb_force = 1;
+	return 1;
+}
+__setup("swiotlb=", setup_io_tlb_npages);
+/* make io_tlb_overflow tunable too? */
+
+/*
+ * Statically reserve bounce buffer space and initialize bounce buffer data
+ * structures for the software IO TLB used to implement the PCI DMA API.
+ */
+void
+swiotlb_init_with_default_size (size_t default_size)
+{
+	unsigned long i;
+
+	if (!io_tlb_nslabs) {
+		io_tlb_nslabs = (default_size >> IO_TLB_SHIFT);
+		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
+	}
+
+	/*
+	 * Get IO TLB memory from the low pages
+	 */
+	io_tlb_start = alloc_bootmem_low_pages(io_tlb_nslabs *
+					       (1 << IO_TLB_SHIFT));
+	if (!io_tlb_start)
+		panic("Cannot allocate SWIOTLB buffer");
+	io_tlb_end = io_tlb_start + io_tlb_nslabs * (1 << IO_TLB_SHIFT);
+
+	/*
+	 * Allocate and initialize the free list array.  This array is used
+	 * to find contiguous free memory regions of size up to IO_TLB_SEGSIZE
+	 * between io_tlb_start and io_tlb_end.
+	 */
+	io_tlb_list = alloc_bootmem(io_tlb_nslabs * sizeof(int));
+	for (i = 0; i < io_tlb_nslabs; i++)
+		io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
+	io_tlb_index = 0;
+	io_tlb_orig_addr = alloc_bootmem(io_tlb_nslabs * sizeof(char *));
+
+	/*
+	 * Get the overflow emergency buffer
+	 */
+	io_tlb_overflow_buffer = alloc_bootmem_low(io_tlb_overflow);
+	printk(KERN_INFO "Placing software IO TLB between 0x%lx - 0x%lx\n",
+	       virt_to_phys(io_tlb_start), virt_to_phys(io_tlb_end));
+}
+
+void
+swiotlb_init (void)
+{
+	swiotlb_init_with_default_size(64 * (1<<20));	/* default to 64MB */
+}
+
+static inline int
+address_needs_mapping(struct device *hwdev, dma_addr_t addr)
+{
+	dma_addr_t mask = 0xffffffff;
+	/* If the device has a mask, use it, otherwise default to 32 bits */
+	if (hwdev && hwdev->dma_mask)
+		mask = *hwdev->dma_mask;
+	return (addr & ~mask) != 0;
+}
+
+/*
+ * Allocates bounce buffer and returns its kernel virtual address.
+ */
+static void *
+map_single(struct device *hwdev, char *buffer, size_t size, int dir)
+{
+	unsigned long flags;
+	char *dma_addr;
+	unsigned int nslots, stride, index, wrap;
+	int i;
+
+	/*
+	 * For mappings greater than a page, we limit the stride (and
+	 * hence alignment) to a page size.
+	 */
+	nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
+	if (size > PAGE_SIZE)
+		stride = (1 << (PAGE_SHIFT - IO_TLB_SHIFT));
+	else
+		stride = 1;
+
+	if (!nslots)
+		BUG();
+
+	/*
+	 * Find suitable number of IO TLB entries size that will fit this
+	 * request and allocate a buffer from that IO TLB pool.
+	 */
+	spin_lock_irqsave(&io_tlb_lock, flags);
+	{
+		wrap = index = ALIGN(io_tlb_index, stride);
+
+		if (index >= io_tlb_nslabs)
+			wrap = index = 0;
+
+		do {
+			/*
+			 * If we find a slot that indicates we have 'nslots'
+			 * number of contiguous buffers, we allocate the
+			 * buffers from that slot and mark the entries as '0'
+			 * indicating unavailable.
+			 */
+			if (io_tlb_list[index] >= nslots) {
+				int count = 0;
+
+				for (i = index; i < (int) (index + nslots); i++)
+					io_tlb_list[i] = 0;
+				for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && io_tlb_list[i]; i--)
+					io_tlb_list[i] = ++count;
+				dma_addr = io_tlb_start + (index << IO_TLB_SHIFT);
+
+				/*
+				 * Update the indices to avoid searching in
+				 * the next round.
+				 */
+				io_tlb_index = ((index + nslots) < io_tlb_nslabs
+						? (index + nslots) : 0);
+
+				goto found;
+			}
+			index += stride;
+			if (index >= io_tlb_nslabs)
+				index = 0;
+		} while (index != wrap);
+
+		spin_unlock_irqrestore(&io_tlb_lock, flags);
+		return NULL;
+	}
+  found:
+	spin_unlock_irqrestore(&io_tlb_lock, flags);
+
+	/*
+	 * Save away the mapping from the original address to the DMA address.
+	 * This is needed when we sync the memory.  Then we sync the buffer if
+	 * needed.
+	 */
+	io_tlb_orig_addr[index] = buffer;
+	if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
+		memcpy(dma_addr, buffer, size);
+
+	return dma_addr;
+}
+
+/*
+ * dma_addr is the kernel virtual address of the bounce buffer to unmap.
+ */
+static void
+unmap_single(struct device *hwdev, char *dma_addr, size_t size, int dir)
+{
+	unsigned long flags;
+	int i, count, nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
+	int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
+	char *buffer = io_tlb_orig_addr[index];
+
+	/*
+	 * First, sync the memory before unmapping the entry
+	 */
+	if (buffer && ((dir == DMA_FROM_DEVICE) || (dir == DMA_BIDIRECTIONAL)))
+		/*
+		 * bounce... copy the data back into the original buffer * and
+		 * delete the bounce buffer.
+		 */
+		memcpy(buffer, dma_addr, size);
+
+	/*
+	 * Return the buffer to the free list by setting the corresponding
+	 * entries to indicate the number of contigous entries available.
+	 * While returning the entries to the free list, we merge the entries
+	 * with slots below and above the pool being returned.
+	 */
+	spin_lock_irqsave(&io_tlb_lock, flags);
+	{
+		count = ((index + nslots) < ALIGN(index + 1, IO_TLB_SEGSIZE) ?
+			 io_tlb_list[index + nslots] : 0);
+		/*
+		 * Step 1: return the slots to the free list, merging the
+		 * slots with superceeding slots
+		 */
+		for (i = index + nslots - 1; i >= index; i--)
+			io_tlb_list[i] = ++count;
+		/*
+		 * Step 2: merge the returned slots with the preceding slots,
+		 * if available (non zero)
+		 */
+		for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && io_tlb_list[i]; i--)
+			io_tlb_list[i] = ++count;
+	}
+	spin_unlock_irqrestore(&io_tlb_lock, flags);
+}
+
+static void
+sync_single(struct device *hwdev, char *dma_addr, size_t size, int dir)
+{
+	int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
+	char *buffer = io_tlb_orig_addr[index];
+
+	/*
+	 * bounce... copy the data back into/from the original buffer
+	 * XXX How do you handle DMA_BIDIRECTIONAL here ?
+	 */
+	if (dir == DMA_FROM_DEVICE)
+		memcpy(buffer, dma_addr, size);
+	else if (dir == DMA_TO_DEVICE)
+		memcpy(dma_addr, buffer, size);
+	else
+		BUG();
+}
+
+void *
+swiotlb_alloc_coherent(struct device *hwdev, size_t size,
+		       dma_addr_t *dma_handle, int flags)
+{
+	unsigned long dev_addr;
+	void *ret;
+	int order = get_order(size);
+
+	/*
+	 * XXX fix me: the DMA API should pass us an explicit DMA mask
+	 * instead, or use ZONE_DMA32 (ia64 overloads ZONE_DMA to be a ~32
+	 * bit range instead of a 16MB one).
+	 */
+	flags |= GFP_DMA;
+
+	ret = (void *)__get_free_pages(flags, order);
+	if (ret && address_needs_mapping(hwdev, virt_to_phys(ret))) {
+		/*
+		 * The allocated memory isn't reachable by the device.
+		 * Fall back on swiotlb_map_single().
+		 */
+		free_pages((unsigned long) ret, order);
+		ret = NULL;
+	}
+	if (!ret) {
+		/*
+		 * We are either out of memory or the device can't DMA
+		 * to GFP_DMA memory; fall back on
+		 * swiotlb_map_single(), which will grab memory from
+		 * the lowest available address range.
+		 */
+		dma_addr_t handle;
+		handle = swiotlb_map_single(NULL, NULL, size, DMA_FROM_DEVICE);
+		if (dma_mapping_error(handle))
+			return NULL;
+
+		ret = phys_to_virt(handle);
+	}
+
+	memset(ret, 0, size);
+	dev_addr = virt_to_phys(ret);
+
+	/* Confirm address can be DMA'd by device */
+	if (address_needs_mapping(hwdev, dev_addr)) {
+		printk("hwdev DMA mask = 0x%016Lx, dev_addr = 0x%016lx\n",
+		       (unsigned long long)*hwdev->dma_mask, dev_addr);
+		panic("swiotlb_alloc_coherent: allocated memory is out of "
+		      "range for device");
+	}
+	*dma_handle = dev_addr;
+	return ret;
+}
+
+void
+swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
+		      dma_addr_t dma_handle)
+{
+	if (!(vaddr >= (void *)io_tlb_start
+	      && vaddr < (void *)io_tlb_end))
+		free_pages((unsigned long) vaddr, get_order(size));
+	else
+		/* DMA_TO_DEVICE to avoid memcpy in unmap_single */
+		swiotlb_unmap_single (hwdev, dma_handle, size, DMA_TO_DEVICE);
+}
+
+static void
+swiotlb_full(struct device *dev, size_t size, int dir, int do_panic)
+{
+	/*
+	 * Ran out of IOMMU space for this operation. This is very bad.
+	 * Unfortunately the drivers cannot handle this operation properly.
+	 * unless they check for pci_dma_mapping_error (most don't)
+	 * When the mapping is small enough return a static buffer to limit
+	 * the damage, or panic when the transfer is too big.
+	 */
+	printk(KERN_ERR "PCI-DMA: Out of SW-IOMMU space for %lu bytes at "
+	       "device %s\n", size, dev ? dev->bus_id : "?");
+
+	if (size > io_tlb_overflow && do_panic) {
+		if (dir == PCI_DMA_FROMDEVICE || dir == PCI_DMA_BIDIRECTIONAL)
+			panic("PCI-DMA: Memory would be corrupted\n");
+		if (dir == PCI_DMA_TODEVICE || dir == PCI_DMA_BIDIRECTIONAL)
+			panic("PCI-DMA: Random memory would be DMAed\n");
+	}
+}
+
+/*
+ * Map a single buffer of the indicated size for DMA in streaming mode.  The
+ * PCI address to use is returned.
+ *
+ * Once the device is given the dma address, the device owns this memory until
+ * either swiotlb_unmap_single or swiotlb_dma_sync_single is performed.
+ */
+dma_addr_t
+swiotlb_map_single(struct device *hwdev, void *ptr, size_t size, int dir)
+{
+	unsigned long dev_addr = virt_to_phys(ptr);
+	void *map;
+
+	if (dir == DMA_NONE)
+		BUG();
+	/*
+	 * If the pointer passed in happens to be in the device's DMA window,
+	 * we can safely return the device addr and not worry about bounce
+	 * buffering it.
+	 */
+	if (!address_needs_mapping(hwdev, dev_addr) && !swiotlb_force)
+		return dev_addr;
+
+	/*
+	 * Oh well, have to allocate and map a bounce buffer.
+	 */
+	map = map_single(hwdev, ptr, size, dir);
+	if (!map) {
+		swiotlb_full(hwdev, size, dir, 1);
+		map = io_tlb_overflow_buffer;
+	}
+
+	dev_addr = virt_to_phys(map);
+
+	/*
+	 * Ensure that the address returned is DMA'ble
+	 */
+	if (address_needs_mapping(hwdev, dev_addr))
+		panic("map_single: bounce buffer is not DMA'ble");
+
+	return dev_addr;
+}
+
+/*
+ * Since DMA is i-cache coherent, any (complete) pages that were written via
+ * DMA can be marked as "clean" so that lazy_mmu_prot_update() doesn't have to
+ * flush them when they get mapped into an executable vm-area.
+ */
+static void
+mark_clean(void *addr, size_t size)
+{
+	unsigned long pg_addr, end;
+
+	pg_addr = PAGE_ALIGN((unsigned long) addr);
+	end = (unsigned long) addr + size;
+	while (pg_addr + PAGE_SIZE <= end) {
+		struct page *page = virt_to_page(pg_addr);
+		set_bit(PG_arch_1, &page->flags);
+		pg_addr += PAGE_SIZE;
+	}
+}
+
+/*
+ * Unmap a single streaming mode DMA translation.  The dma_addr and size must
+ * match what was provided for in a previous swiotlb_map_single call.  All
+ * other usages are undefined.
+ *
+ * After this call, reads by the cpu to the buffer are guaranteed to see
+ * whatever the device wrote there.
+ */
+void
+swiotlb_unmap_single(struct device *hwdev, dma_addr_t dev_addr, size_t size,
+		     int dir)
+{
+	char *dma_addr = phys_to_virt(dev_addr);
+
+	if (dir == DMA_NONE)
+		BUG();
+	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
+		unmap_single(hwdev, dma_addr, size, dir);
+	else if (dir == DMA_FROM_DEVICE)
+		mark_clean(dma_addr, size);
+}
+
+/*
+ * Make physical memory consistent for a single streaming mode DMA translation
+ * after a transfer.
+ *
+ * If you perform a swiotlb_map_single() but wish to interrogate the buffer
+ * using the cpu, yet do not wish to teardown the PCI dma mapping, you must
+ * call this function before doing so.  At the next point you give the PCI dma
+ * address back to the card, you must first perform a
+ * swiotlb_dma_sync_for_device, and then the device again owns the buffer
+ */
+void
+swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
+			    size_t size, int dir)
+{
+	char *dma_addr = phys_to_virt(dev_addr);
+
+	if (dir == DMA_NONE)
+		BUG();
+	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
+		sync_single(hwdev, dma_addr, size, dir);
+	else if (dir == DMA_FROM_DEVICE)
+		mark_clean(dma_addr, size);
+}
+
+void
+swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
+			       size_t size, int dir)
+{
+	char *dma_addr = phys_to_virt(dev_addr);
+
+	if (dir == DMA_NONE)
+		BUG();
+	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
+		sync_single(hwdev, dma_addr, size, dir);
+	else if (dir == DMA_FROM_DEVICE)
+		mark_clean(dma_addr, size);
+}
+
+/*
+ * Map a set of buffers described by scatterlist in streaming mode for DMA.
+ * This is the scatter-gather version of the above swiotlb_map_single
+ * interface.  Here the scatter gather list elements are each tagged with the
+ * appropriate dma address and length.  They are obtained via
+ * sg_dma_{address,length}(SG).
+ *
+ * NOTE: An implementation may be able to use a smaller number of
+ *       DMA address/length pairs than there are SG table elements.
+ *       (for example via virtual mapping capabilities)
+ *       The routine returns the number of addr/length pairs actually
+ *       used, at most nents.
+ *
+ * Device ownership issues as mentioned above for swiotlb_map_single are the
+ * same here.
+ */
+int
+swiotlb_map_sg(struct device *hwdev, struct scatterlist *sg, int nelems,
+	       int dir)
+{
+	void *addr;
+	unsigned long dev_addr;
+	int i;
+
+	if (dir == DMA_NONE)
+		BUG();
+
+	for (i = 0; i < nelems; i++, sg++) {
+		addr = SG_ENT_VIRT_ADDRESS(sg);
+		dev_addr = virt_to_phys(addr);
+		if (swiotlb_force || address_needs_mapping(hwdev, dev_addr)) {
+			sg->dma_address = (dma_addr_t) virt_to_phys(map_single(hwdev, addr, sg->length, dir));
+			if (!sg->dma_address) {
+				/* Don't panic here, we expect map_sg users
+				   to do proper error handling. */
+				swiotlb_full(hwdev, sg->length, dir, 0);
+				swiotlb_unmap_sg(hwdev, sg - i, i, dir);
+				sg[0].dma_length = 0;
+				return 0;
+			}
+		} else
+			sg->dma_address = dev_addr;
+		sg->dma_length = sg->length;
+	}
+	return nelems;
+}
+
+/*
+ * Unmap a set of streaming mode DMA translations.  Again, cpu read rules
+ * concerning calls here are the same as for swiotlb_unmap_single() above.
+ */
+void
+swiotlb_unmap_sg(struct device *hwdev, struct scatterlist *sg, int nelems,
+		 int dir)
+{
+	int i;
+
+	if (dir == DMA_NONE)
+		BUG();
+
+	for (i = 0; i < nelems; i++, sg++)
+		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
+			unmap_single(hwdev, (void *) phys_to_virt(sg->dma_address), sg->dma_length, dir);
+		else if (dir == DMA_FROM_DEVICE)
+			mark_clean(SG_ENT_VIRT_ADDRESS(sg), sg->dma_length);
+}
+
+/*
+ * Make physical memory consistent for a set of streaming mode DMA translations
+ * after a transfer.
+ *
+ * The same as swiotlb_sync_single_* but for a scatter-gather list, same rules
+ * and usage.
+ */
+void
+swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg,
+			int nelems, int dir)
+{
+	int i;
+
+	if (dir == DMA_NONE)
+		BUG();
+
+	for (i = 0; i < nelems; i++, sg++)
+		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
+			sync_single(hwdev, (void *) sg->dma_address,
+				    sg->dma_length, dir);
+}
+
+void
+swiotlb_sync_sg_for_device(struct device *hwdev, struct scatterlist *sg,
+			   int nelems, int dir)
+{
+	int i;
+
+	if (dir == DMA_NONE)
+		BUG();
+
+	for (i = 0; i < nelems; i++, sg++)
+		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
+			sync_single(hwdev, (void *) sg->dma_address,
+				    sg->dma_length, dir);
+}
+
+int
+swiotlb_dma_mapping_error(dma_addr_t dma_addr)
+{
+	return (dma_addr == virt_to_phys(io_tlb_overflow_buffer));
+}
+
+/*
+ * Return whether the given PCI device DMA address mask can be supported
+ * properly.  For example, if your device can only drive the low 24-bits
+ * during PCI bus mastering, then you would pass 0x00ffffff as the mask to
+ * this function.
+ */
+int
+swiotlb_dma_supported (struct device *hwdev, u64 mask)
+{
+	return (virt_to_phys (io_tlb_end) - 1) <= mask;
+}
+
+EXPORT_SYMBOL(swiotlb_init);
+EXPORT_SYMBOL(swiotlb_map_single);
+EXPORT_SYMBOL(swiotlb_unmap_single);
+EXPORT_SYMBOL(swiotlb_map_sg);
+EXPORT_SYMBOL(swiotlb_unmap_sg);
+EXPORT_SYMBOL(swiotlb_sync_single_for_cpu);
+EXPORT_SYMBOL(swiotlb_sync_single_for_device);
+EXPORT_SYMBOL(swiotlb_sync_sg_for_cpu);
+EXPORT_SYMBOL(swiotlb_sync_sg_for_device);
+EXPORT_SYMBOL(swiotlb_dma_mapping_error);
+EXPORT_SYMBOL(swiotlb_alloc_coherent);
+EXPORT_SYMBOL(swiotlb_free_coherent);
+EXPORT_SYMBOL(swiotlb_dma_supported);
--- linux-swiotlb-9_9_2005/lib/Makefile.orig	2005-09-09 14:27:39.000000000 -0400
+++ linux-swiotlb-9_9_2005/lib/Makefile	2005-09-09 16:17:44.000000000 -0400
@@ -43,6 +43,8 @@ obj-$(CONFIG_TEXTSEARCH_KMP) += ts_kmp.o
 obj-$(CONFIG_TEXTSEARCH_BM) += ts_bm.o
 obj-$(CONFIG_TEXTSEARCH_FSM) += ts_fsm.o
 
+obj-$(CONFIG_SWIOTLB) += swiotlb.o
+
 hostprogs-y	:= gen_crc32table
 clean-files	:= crc32table.h
--- linux-swiotlb-9_9_2005/arch/x86_64/kernel/Makefile.orig	2005-09-09 14:27:34.000000000 -0400
+++ linux-swiotlb-9_9_2005/arch/x86_64/kernel/Makefile	2005-09-09 16:17:44.000000000 -0400
@@ -27,7 +27,6 @@ obj-$(CONFIG_CPU_FREQ) += cpufreq/
 obj-$(CONFIG_EARLY_PRINTK)	+= early_printk.o
 obj-$(CONFIG_GART_IOMMU)	+= pci-gart.o aperture.o
 obj-$(CONFIG_DUMMY_IOMMU)	+= pci-nommu.o pci-dma.o
-obj-$(CONFIG_SWIOTLB)	+= swiotlb.o
 obj-$(CONFIG_KPROBES)		+= kprobes.o
 obj-$(CONFIG_X86_PM_TIMER)	+= pmtimer.o
 
@@ -41,7 +40,6 @@ CFLAGS_vsyscall.o := $(PROFILING) -g0
 bootflag-y			+= ../../i386/kernel/bootflag.o
 cpuid-$(subst m,y,$(CONFIG_X86_CPUID))  += ../../i386/kernel/cpuid.o
 topology-y                     += ../../i386/mach-default/topology.o
-swiotlb-$(CONFIG_SWIOTLB)      += ../../ia64/lib/swiotlb.o
 microcode-$(subst m,y,$(CONFIG_MICROCODE))  += ../../i386/kernel/microcode.o
 intel_cacheinfo-y		+= ../../i386/kernel/cpu/intel_cacheinfo.o
 quirks-y			+= ../../i386/kernel/quirks.o
--- linux-swiotlb-9_9_2005/arch/ia64/Kconfig.orig	2005-09-09 14:27:33.000000000 -0400
+++ linux-swiotlb-9_9_2005/arch/ia64/Kconfig	2005-09-09 16:17:44.000000000 -0400
@@ -26,6 +26,10 @@ config MMU
 	bool
 	default y
 
+config SWIOTLB
+	bool
+	default y
+
 config RWSEM_XCHGADD_ALGORITHM
 	bool
 	default y
--- linux-swiotlb-9_9_2005/arch/ia64/lib/swiotlb.c.orig	2005-09-09 14:27:33.000000000 -0400
+++ linux-swiotlb-9_9_2005/arch/ia64/lib/swiotlb.c	2005-09-09 16:18:46.000000000 -0400
@@ -1,657 +0,0 @@
-/*
- * Dynamic DMA mapping support.
- *
- * This implementation is for IA-64 platforms that do not support
- * I/O TLBs (aka DMA address translation hardware).
- * Copyright (C) 2000 Asit Mallick <Asit.K.Mallick@intel.com>
- * Copyright (C) 2000 Goutham Rao <goutham.rao@intel.com>
- * Copyright (C) 2000, 2003 Hewlett-Packard Co
- *	David Mosberger-Tang <davidm@hpl.hp.com>
- *
- * 03/05/07 davidm	Switch from PCI-DMA to generic device DMA API.
- * 00/12/13 davidm	Rename to swiotlb.c and add mark_clean() to avoid
- *			unnecessary i-cache flushing.
- * 04/07/.. ak		Better overflow handling. Assorted fixes.
- */
-
-#include <linux/cache.h>
-#include <linux/mm.h>
-#include <linux/module.h>
-#include <linux/pci.h>
-#include <linux/spinlock.h>
-#include <linux/string.h>
-#include <linux/types.h>
-#include <linux/ctype.h>
-
-#include <asm/io.h>
-#include <asm/pci.h>
-#include <asm/dma.h>
-
-#include <linux/init.h>
-#include <linux/bootmem.h>
-
-#define OFFSET(val,align) ((unsigned long)	\
-	( (val) & ( (align) - 1)))
-
-#define SG_ENT_VIRT_ADDRESS(sg)	(page_address((sg)->page) + (sg)->offset)
-#define SG_ENT_PHYS_ADDRESS(SG)	virt_to_phys(SG_ENT_VIRT_ADDRESS(SG))
-
-/*
- * Maximum allowable number of contiguous slabs to map,
- * must be a power of 2.  What is the appropriate value ?
- * The complexity of {map,unmap}_single is linearly dependent on this value.
- */
-#define IO_TLB_SEGSIZE	128
-
-/*
- * log of the size of each IO TLB slab.  The number of slabs is command line
- * controllable.
- */
-#define IO_TLB_SHIFT 11
-
-int swiotlb_force;
-
-/*
- * Used to do a quick range check in swiotlb_unmap_single and
- * swiotlb_sync_single_*, to see if the memory was in fact allocated by this
- * API.
- */
-static char *io_tlb_start, *io_tlb_end;
-
-/*
- * The number of IO TLB blocks (in groups of 64) betweeen io_tlb_start and
- * io_tlb_end.  This is command line adjustable via setup_io_tlb_npages.
- */
-static unsigned long io_tlb_nslabs;
-
-/*
- * When the IOMMU overflows we return a fallback buffer. This sets the size.
- */
-static unsigned long io_tlb_overflow = 32*1024;
-
-void *io_tlb_overflow_buffer;
-
-/*
- * This is a free list describing the number of free entries available from
- * each index
- */
-static unsigned int *io_tlb_list;
-static unsigned int io_tlb_index;
-
-/*
- * We need to save away the original address corresponding to a mapped entry
- * for the sync operations.
- */ -static unsigned char **io_tlb_orig_addr; - -/* - * Protect the above data structures in the map and unmap calls - */ -static DEFINE_SPINLOCK(io_tlb_lock); - -static int __init -setup_io_tlb_npages(char *str) -{ - if (isdigit(*str)) { - io_tlb_nslabs = simple_strtoul(str, &str, 0); - /* avoid tail segment of size < IO_TLB_SEGSIZE */ - io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE); - } - if (*str == ',') - ++str; - if (!strcmp(str, "force")) - swiotlb_force = 1; - return 1; -} -__setup("swiotlb=", setup_io_tlb_npages); -/* make io_tlb_overflow tunable too? */ - -/* - * Statically reserve bounce buffer space and initialize bounce buffer data - * structures for the software IO TLB used to implement the PCI DMA API. - */ -void -swiotlb_init_with_default_size (size_t default_size) -{ - unsigned long i; - - if (!io_tlb_nslabs) { - io_tlb_nslabs = (default_size >> IO_TLB_SHIFT); - io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE); - } - - /* - * Get IO TLB memory from the low pages - */ - io_tlb_start = alloc_bootmem_low_pages(io_tlb_nslabs * - (1 << IO_TLB_SHIFT)); - if (!io_tlb_start) - panic("Cannot allocate SWIOTLB buffer"); - io_tlb_end = io_tlb_start + io_tlb_nslabs * (1 << IO_TLB_SHIFT); - - /* - * Allocate and initialize the free list array. This array is used - * to find contiguous free memory regions of size up to IO_TLB_SEGSIZE - * between io_tlb_start and io_tlb_end. 
- */ - io_tlb_list = alloc_bootmem(io_tlb_nslabs * sizeof(int)); - for (i = 0; i < io_tlb_nslabs; i++) - io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE); - io_tlb_index = 0; - io_tlb_orig_addr = alloc_bootmem(io_tlb_nslabs * sizeof(char *)); - - /* - * Get the overflow emergency buffer - */ - io_tlb_overflow_buffer = alloc_bootmem_low(io_tlb_overflow); - printk(KERN_INFO "Placing software IO TLB between 0x%lx - 0x%lx\n", - virt_to_phys(io_tlb_start), virt_to_phys(io_tlb_end)); -} - -void -swiotlb_init (void) -{ - swiotlb_init_with_default_size(64 * (1<<20)); /* default to 64MB */ -} - -static inline int -address_needs_mapping(struct device *hwdev, dma_addr_t addr) -{ - dma_addr_t mask = 0xffffffff; - /* If the device has a mask, use it, otherwise default to 32 bits */ - if (hwdev && hwdev->dma_mask) - mask = *hwdev->dma_mask; - return (addr & ~mask) != 0; -} - -/* - * Allocates bounce buffer and returns its kernel virtual address. - */ -static void * -map_single(struct device *hwdev, char *buffer, size_t size, int dir) -{ - unsigned long flags; - char *dma_addr; - unsigned int nslots, stride, index, wrap; - int i; - - /* - * For mappings greater than a page, we limit the stride (and - * hence alignment) to a page size. - */ - nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT; - if (size > PAGE_SIZE) - stride = (1 << (PAGE_SHIFT - IO_TLB_SHIFT)); - else - stride = 1; - - if (!nslots) - BUG(); - - /* - * Find suitable number of IO TLB entries size that will fit this - * request and allocate a buffer from that IO TLB pool. - */ - spin_lock_irqsave(&io_tlb_lock, flags); - { - wrap = index = ALIGN(io_tlb_index, stride); - - if (index >= io_tlb_nslabs) - wrap = index = 0; - - do { - /* - * If we find a slot that indicates we have 'nslots' - * number of contiguous buffers, we allocate the - * buffers from that slot and mark the entries as '0' - * indicating unavailable. 
- */ - if (io_tlb_list[index] >= nslots) { - int count = 0; - - for (i = index; i < (int) (index + nslots); i++) - io_tlb_list[i] = 0; - for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && io_tlb_list[i]; i--) - io_tlb_list[i] = ++count; - dma_addr = io_tlb_start + (index << IO_TLB_SHIFT); - - /* - * Update the indices to avoid searching in - * the next round. - */ - io_tlb_index = ((index + nslots) < io_tlb_nslabs - ? (index + nslots) : 0); - - goto found; - } - index += stride; - if (index >= io_tlb_nslabs) - index = 0; - } while (index != wrap); - - spin_unlock_irqrestore(&io_tlb_lock, flags); - return NULL; - } - found: - spin_unlock_irqrestore(&io_tlb_lock, flags); - - /* - * Save away the mapping from the original address to the DMA address. - * This is needed when we sync the memory. Then we sync the buffer if - * needed. - */ - io_tlb_orig_addr[index] = buffer; - if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL) - memcpy(dma_addr, buffer, size); - - return dma_addr; -} - -/* - * dma_addr is the kernel virtual address of the bounce buffer to unmap. - */ -static void -unmap_single(struct device *hwdev, char *dma_addr, size_t size, int dir) -{ - unsigned long flags; - int i, count, nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT; - int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT; - char *buffer = io_tlb_orig_addr[index]; - - /* - * First, sync the memory before unmapping the entry - */ - if (buffer && ((dir == DMA_FROM_DEVICE) || (dir == DMA_BIDIRECTIONAL))) - /* - * bounce... copy the data back into the original buffer * and - * delete the bounce buffer. - */ - memcpy(buffer, dma_addr, size); - - /* - * Return the buffer to the free list by setting the corresponding - * entries to indicate the number of contigous entries available. - * While returning the entries to the free list, we merge the entries - * with slots below and above the pool being returned. 
- */ - spin_lock_irqsave(&io_tlb_lock, flags); - { - count = ((index + nslots) < ALIGN(index + 1, IO_TLB_SEGSIZE) ? - io_tlb_list[index + nslots] : 0); - /* - * Step 1: return the slots to the free list, merging the - * slots with superceeding slots - */ - for (i = index + nslots - 1; i >= index; i--) - io_tlb_list[i] = ++count; - /* - * Step 2: merge the returned slots with the preceding slots, - * if available (non zero) - */ - for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && io_tlb_list[i]; i--) - io_tlb_list[i] = ++count; - } - spin_unlock_irqrestore(&io_tlb_lock, flags); -} - -static void -sync_single(struct device *hwdev, char *dma_addr, size_t size, int dir) -{ - int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT; - char *buffer = io_tlb_orig_addr[index]; - - /* - * bounce... copy the data back into/from the original buffer - * XXX How do you handle DMA_BIDIRECTIONAL here ? - */ - if (dir == DMA_FROM_DEVICE) - memcpy(buffer, dma_addr, size); - else if (dir == DMA_TO_DEVICE) - memcpy(dma_addr, buffer, size); - else - BUG(); -} - -void * -swiotlb_alloc_coherent(struct device *hwdev, size_t size, - dma_addr_t *dma_handle, int flags) -{ - unsigned long dev_addr; - void *ret; - int order = get_order(size); - - /* - * XXX fix me: the DMA API should pass us an explicit DMA mask - * instead, or use ZONE_DMA32 (ia64 overloads ZONE_DMA to be a ~32 - * bit range instead of a 16MB one). - */ - flags |= GFP_DMA; - - ret = (void *)__get_free_pages(flags, order); - if (ret && address_needs_mapping(hwdev, virt_to_phys(ret))) { - /* - * The allocated memory isn't reachable by the device. - * Fall back on swiotlb_map_single(). - */ - free_pages((unsigned long) ret, order); - ret = NULL; - } - if (!ret) { - /* - * We are either out of memory or the device can't DMA - * to GFP_DMA memory; fall back on - * swiotlb_map_single(), which will grab memory from - * the lowest available address range. 
- */ - dma_addr_t handle; - handle = swiotlb_map_single(NULL, NULL, size, DMA_FROM_DEVICE); - if (dma_mapping_error(handle)) - return NULL; - - ret = phys_to_virt(handle); - } - - memset(ret, 0, size); - dev_addr = virt_to_phys(ret); - - /* Confirm address can be DMA'd by device */ - if (address_needs_mapping(hwdev, dev_addr)) { - printk("hwdev DMA mask = 0x%016Lx, dev_addr = 0x%016lx\n", - (unsigned long long)*hwdev->dma_mask, dev_addr); - panic("swiotlb_alloc_coherent: allocated memory is out of " - "range for device"); - } - *dma_handle = dev_addr; - return ret; -} - -void -swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr, - dma_addr_t dma_handle) -{ - if (!(vaddr >= (void *)io_tlb_start - && vaddr < (void *)io_tlb_end)) - free_pages((unsigned long) vaddr, get_order(size)); - else - /* DMA_TO_DEVICE to avoid memcpy in unmap_single */ - swiotlb_unmap_single (hwdev, dma_handle, size, DMA_TO_DEVICE); -} - -static void -swiotlb_full(struct device *dev, size_t size, int dir, int do_panic) -{ - /* - * Ran out of IOMMU space for this operation. This is very bad. - * Unfortunately the drivers cannot handle this operation properly. - * unless they check for pci_dma_mapping_error (most don't) - * When the mapping is small enough return a static buffer to limit - * the damage, or panic when the transfer is too big. - */ - printk(KERN_ERR "PCI-DMA: Out of SW-IOMMU space for %lu bytes at " - "device %s\n", size, dev ? dev->bus_id : "?"); - - if (size > io_tlb_overflow && do_panic) { - if (dir == PCI_DMA_FROMDEVICE || dir == PCI_DMA_BIDIRECTIONAL) - panic("PCI-DMA: Memory would be corrupted\n"); - if (dir == PCI_DMA_TODEVICE || dir == PCI_DMA_BIDIRECTIONAL) - panic("PCI-DMA: Random memory would be DMAed\n"); - } -} - -/* - * Map a single buffer of the indicated size for DMA in streaming mode. The - * PCI address to use is returned. 
- * - * Once the device is given the dma address, the device owns this memory until - * either swiotlb_unmap_single or swiotlb_dma_sync_single is performed. - */ -dma_addr_t -swiotlb_map_single(struct device *hwdev, void *ptr, size_t size, int dir) -{ - unsigned long dev_addr = virt_to_phys(ptr); - void *map; - - if (dir == DMA_NONE) - BUG(); - /* - * If the pointer passed in happens to be in the device's DMA window, - * we can safely return the device addr and not worry about bounce - * buffering it. - */ - if (!address_needs_mapping(hwdev, dev_addr) && !swiotlb_force) - return dev_addr; - - /* - * Oh well, have to allocate and map a bounce buffer. - */ - map = map_single(hwdev, ptr, size, dir); - if (!map) { - swiotlb_full(hwdev, size, dir, 1); - map = io_tlb_overflow_buffer; - } - - dev_addr = virt_to_phys(map); - - /* - * Ensure that the address returned is DMA'ble - */ - if (address_needs_mapping(hwdev, dev_addr)) - panic("map_single: bounce buffer is not DMA'ble"); - - return dev_addr; -} - -/* - * Since DMA is i-cache coherent, any (complete) pages that were written via - * DMA can be marked as "clean" so that lazy_mmu_prot_update() doesn't have to - * flush them when they get mapped into an executable vm-area. - */ -static void -mark_clean(void *addr, size_t size) -{ - unsigned long pg_addr, end; - - pg_addr = PAGE_ALIGN((unsigned long) addr); - end = (unsigned long) addr + size; - while (pg_addr + PAGE_SIZE <= end) { - struct page *page = virt_to_page(pg_addr); - set_bit(PG_arch_1, &page->flags); - pg_addr += PAGE_SIZE; - } -} - -/* - * Unmap a single streaming mode DMA translation. The dma_addr and size must - * match what was provided for in a previous swiotlb_map_single call. All - * other usages are undefined. - * - * After this call, reads by the cpu to the buffer are guaranteed to see - * whatever the device wrote there. 
- */ -void -swiotlb_unmap_single(struct device *hwdev, dma_addr_t dev_addr, size_t size, - int dir) -{ - char *dma_addr = phys_to_virt(dev_addr); - - if (dir == DMA_NONE) - BUG(); - if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end) - unmap_single(hwdev, dma_addr, size, dir); - else if (dir == DMA_FROM_DEVICE) - mark_clean(dma_addr, size); -} - -/* - * Make physical memory consistent for a single streaming mode DMA translation - * after a transfer. - * - * If you perform a swiotlb_map_single() but wish to interrogate the buffer - * using the cpu, yet do not wish to teardown the PCI dma mapping, you must - * call this function before doing so. At the next point you give the PCI dma - * address back to the card, you must first perform a - * swiotlb_dma_sync_for_device, and then the device again owns the buffer - */ -void -swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr, - size_t size, int dir) -{ - char *dma_addr = phys_to_virt(dev_addr); - - if (dir == DMA_NONE) - BUG(); - if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end) - sync_single(hwdev, dma_addr, size, dir); - else if (dir == DMA_FROM_DEVICE) - mark_clean(dma_addr, size); -} - -void -swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr, - size_t size, int dir) -{ - char *dma_addr = phys_to_virt(dev_addr); - - if (dir == DMA_NONE) - BUG(); - if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end) - sync_single(hwdev, dma_addr, size, dir); - else if (dir == DMA_FROM_DEVICE) - mark_clean(dma_addr, size); -} - -/* - * Map a set of buffers described by scatterlist in streaming mode for DMA. - * This is the scatter-gather version of the above swiotlb_map_single - * interface. Here the scatter gather list elements are each tagged with the - * appropriate dma address and length. They are obtained via - * sg_dma_{address,length}(SG). - * - * NOTE: An implementation may be able to use a smaller number of - * DMA address/length pairs than there are SG table elements. 
- * (for example via virtual mapping capabilities) - * The routine returns the number of addr/length pairs actually - * used, at most nents. - * - * Device ownership issues as mentioned above for swiotlb_map_single are the - * same here. - */ -int -swiotlb_map_sg(struct device *hwdev, struct scatterlist *sg, int nelems, - int dir) -{ - void *addr; - unsigned long dev_addr; - int i; - - if (dir == DMA_NONE) - BUG(); - - for (i = 0; i < nelems; i++, sg++) { - addr = SG_ENT_VIRT_ADDRESS(sg); - dev_addr = virt_to_phys(addr); - if (swiotlb_force || address_needs_mapping(hwdev, dev_addr)) { - sg->dma_address = (dma_addr_t) virt_to_phys(map_single(hwdev, addr, sg->length, dir)); - if (!sg->dma_address) { - /* Don't panic here, we expect map_sg users - to do proper error handling. */ - swiotlb_full(hwdev, sg->length, dir, 0); - swiotlb_unmap_sg(hwdev, sg - i, i, dir); - sg[0].dma_length = 0; - return 0; - } - } else - sg->dma_address = dev_addr; - sg->dma_length = sg->length; - } - return nelems; -} - -/* - * Unmap a set of streaming mode DMA translations. Again, cpu read rules - * concerning calls here are the same as for swiotlb_unmap_single() above. - */ -void -swiotlb_unmap_sg(struct device *hwdev, struct scatterlist *sg, int nelems, - int dir) -{ - int i; - - if (dir == DMA_NONE) - BUG(); - - for (i = 0; i < nelems; i++, sg++) - if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg)) - unmap_single(hwdev, (void *) phys_to_virt(sg->dma_address), sg->dma_length, dir); - else if (dir == DMA_FROM_DEVICE) - mark_clean(SG_ENT_VIRT_ADDRESS(sg), sg->dma_length); -} - -/* - * Make physical memory consistent for a set of streaming mode DMA translations - * after a transfer. - * - * The same as swiotlb_sync_single_* but for a scatter-gather list, same rules - * and usage. 
- */ -void -swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg, - int nelems, int dir) -{ - int i; - - if (dir == DMA_NONE) - BUG(); - - for (i = 0; i < nelems; i++, sg++) - if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg)) - sync_single(hwdev, (void *) sg->dma_address, - sg->dma_length, dir); -} - -void -swiotlb_sync_sg_for_device(struct device *hwdev, struct scatterlist *sg, - int nelems, int dir) -{ - int i; - - if (dir == DMA_NONE) - BUG(); - - for (i = 0; i < nelems; i++, sg++) - if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg)) - sync_single(hwdev, (void *) sg->dma_address, - sg->dma_length, dir); -} - -int -swiotlb_dma_mapping_error(dma_addr_t dma_addr) -{ - return (dma_addr == virt_to_phys(io_tlb_overflow_buffer)); -} - -/* - * Return whether the given PCI device DMA address mask can be supported - * properly. For example, if your device can only drive the low 24-bits - * during PCI bus mastering, then you would pass 0x00ffffff as the mask to - * this function. - */ -int -swiotlb_dma_supported (struct device *hwdev, u64 mask) -{ - return (virt_to_phys (io_tlb_end) - 1) <= mask; -} - -EXPORT_SYMBOL(swiotlb_init); -EXPORT_SYMBOL(swiotlb_map_single); -EXPORT_SYMBOL(swiotlb_unmap_single); -EXPORT_SYMBOL(swiotlb_map_sg); -EXPORT_SYMBOL(swiotlb_unmap_sg); -EXPORT_SYMBOL(swiotlb_sync_single_for_cpu); -EXPORT_SYMBOL(swiotlb_sync_single_for_device); -EXPORT_SYMBOL(swiotlb_sync_sg_for_cpu); -EXPORT_SYMBOL(swiotlb_sync_sg_for_device); -EXPORT_SYMBOL(swiotlb_dma_mapping_error); -EXPORT_SYMBOL(swiotlb_alloc_coherent); -EXPORT_SYMBOL(swiotlb_free_coherent); -EXPORT_SYMBOL(swiotlb_dma_supported); --- linux-swiotlb-9_9_2005/arch/ia64/lib/Makefile.orig 2005-09-09 14:27:33.000000000 -0400 +++ linux-swiotlb-9_9_2005/arch/ia64/lib/Makefile 2005-09-09 16:17:44.000000000 -0400 @@ -9,7 +9,7 @@ lib-y := __divsi3.o __udivsi3.o __modsi3 bitop.o checksum.o clear_page.o csum_partial_copy.o \ clear_user.o strncpy_from_user.o strlen_user.o strnlen_user.o \ flush.o 
ip_fast_csum.o do_csum.o \ - memset.o strlen.o swiotlb.o + memset.o strlen.o lib-$(CONFIG_ITANIUM) += copy_page.o copy_user.o memcpy.o lib-$(CONFIG_MCKINLEY) += copy_page_mck.o memcpy_mck.o ^ permalink raw reply [flat|nested] 20+ messages in thread
* [patch 2.6.13 2/6] swiotlb: cleanup some code duplication cruft 2005-09-12 14:48 ` [patch 2.6.13 1/6] swiotlb: move from arch/ia64/lib to lib John W. Linville @ 2005-09-12 14:48 ` John W. Linville 2005-09-12 14:48 ` [patch 2.6.13 3/6] swiotlb: support syncing sub-ranges of mappings John W. Linville 0 siblings, 1 reply; 20+ messages in thread From: John W. Linville @ 2005-09-12 14:48 UTC (permalink / raw) To: linux-kernel, discuss, linux-ia64; +Cc: ak, tony.luck, Asit.K.Mallick The implementations of swiotlb_sync_single_for_{cpu,device} are identical. Likewise for swiotlb_sync_sg_for_{cpu,device}. This patch moves the guts of those functions to two new inline functions, and calls the appropriate one from the bodies of those functions. Signed-off-by: John W. Linville <linville@tuxdriver.com> --- lib/swiotlb.c | 45 ++++++++++++++++++++++----------------------- 1 files changed, 22 insertions(+), 23 deletions(-) diff --git a/lib/swiotlb.c b/lib/swiotlb.c --- a/lib/swiotlb.c +++ b/lib/swiotlb.c @@ -492,9 +492,9 @@ swiotlb_unmap_single(struct device *hwde * address back to the card, you must first perform a * swiotlb_dma_sync_for_device, and then the device again owns the buffer */ -void -swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr, - size_t size, int dir) +static inline void +swiotlb_sync_single(struct device *hwdev, dma_addr_t dev_addr, + size_t size, int dir) { char *dma_addr = phys_to_virt(dev_addr); @@ -507,17 +507,17 @@ swiotlb_sync_single_for_cpu(struct devic } void +swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr, + size_t size, int dir) +{ + swiotlb_sync_single(hwdev, dev_addr, size, dir); +} + +void swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr, size_t size, int dir) { - char *dma_addr = phys_to_virt(dev_addr); - - if (dir == DMA_NONE) - BUG(); - if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end) - sync_single(hwdev, dma_addr, size, dir); - else if (dir == DMA_FROM_DEVICE) - 
mark_clean(dma_addr, size); + swiotlb_sync_single(hwdev, dev_addr, size, dir); } /* @@ -594,9 +594,9 @@ swiotlb_unmap_sg(struct device *hwdev, s * The same as swiotlb_sync_single_* but for a scatter-gather list, same rules * and usage. */ -void -swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg, - int nelems, int dir) +static inline void +swiotlb_sync_sg(struct device *hwdev, struct scatterlist *sg, + int nelems, int dir) { int i; @@ -610,18 +610,17 @@ swiotlb_sync_sg_for_cpu(struct device *h } void +swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg, + int nelems, int dir) +{ + swiotlb_sync_sg(hwdev, sg, nelems, dir); +} + +void swiotlb_sync_sg_for_device(struct device *hwdev, struct scatterlist *sg, int nelems, int dir) { - int i; - - if (dir == DMA_NONE) - BUG(); - - for (i = 0; i < nelems; i++, sg++) - if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg)) - sync_single(hwdev, (void *) sg->dma_address, - sg->dma_length, dir); + swiotlb_sync_sg(hwdev, sg, nelems, dir); } int ^ permalink raw reply [flat|nested] 20+ messages in thread
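The wrapper-plus-shared-helper consolidation in the patch above can be modeled as a minimal standalone sketch. All names here are illustrative stand-ins, not the actual kernel symbols, and the shared helper only counts calls rather than doing a real bounce-buffer copy:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical model of the consolidation: two public entry points
 * that were byte-for-byte identical now delegate to one static inline
 * helper carrying the shared body. */

static int sync_calls; /* counts invocations of the shared helper */

static inline void sync_single_common(void *buf, size_t size, int dir)
{
	(void)buf; (void)size; (void)dir;
	sync_calls++; /* stand-in for the real bounce-buffer work */
}

void sync_single_for_cpu(void *buf, size_t size, int dir)
{
	sync_single_common(buf, size, dir);
}

void sync_single_for_device(void *buf, size_t size, int dir)
{
	sync_single_common(buf, size, dir);
}
```

The point of the pattern is that the shared body lives in exactly one place, while the `_for_cpu`/`_for_device` entry points keep the exported API stable by delegating to it.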
* [patch 2.6.13 3/6] swiotlb: support syncing sub-ranges of mappings 2005-09-12 14:48 ` [patch 2.6.13 2/6] swiotlb: cleanup some code duplication cruft John W. Linville @ 2005-09-12 14:48 ` John W. Linville 2005-09-12 14:48 ` [patch 2.6.13 4/6] swiotlb: support syncing DMA_BIDIRECTIONAL mappings John W. Linville 0 siblings, 1 reply; 20+ messages in thread From: John W. Linville @ 2005-09-12 14:48 UTC (permalink / raw) To: linux-kernel, discuss, linux-ia64; +Cc: ak, tony.luck, Asit.K.Mallick This patch implements swiotlb_sync_single_range_for_{cpu,device}. This is intended to support an x86_64 implementation of dma_sync_single_range_for_{cpu,device}. Signed-off-by: John W. Linville <linville@tuxdriver.com> --- include/asm-x86_64/swiotlb.h | 8 ++++++++ lib/swiotlb.c | 33 +++++++++++++++++++++++++++++++++ 2 files changed, 41 insertions(+) diff --git a/include/asm-x86_64/swiotlb.h b/include/asm-x86_64/swiotlb.h --- a/include/asm-x86_64/swiotlb.h +++ b/include/asm-x86_64/swiotlb.h @@ -15,6 +15,14 @@ extern void swiotlb_sync_single_for_cpu( extern void swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr, size_t size, int dir); +extern void swiotlb_sync_single_range_for_cpu(struct device *hwdev, + dma_addr_t dev_addr, + unsigned long offset, + size_t size, int dir); +extern void swiotlb_sync_single_range_for_device(struct device *hwdev, + dma_addr_t dev_addr, + unsigned long offset, + size_t size, int dir); extern void swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg, int nelems, int dir); diff --git a/lib/swiotlb.c b/lib/swiotlb.c --- a/lib/swiotlb.c +++ b/lib/swiotlb.c @@ -521,6 +521,37 @@ swiotlb_sync_single_for_device(struct de } /* + * Same as above, but for a sub-range of the mapping. 
+ */ +static inline void +swiotlb_sync_single_range(struct device *hwdev, dma_addr_t dev_addr, + unsigned long offset, size_t size, int dir) +{ + char *dma_addr = phys_to_virt(dev_addr) + offset; + + if (dir == DMA_NONE) + BUG(); + if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end) + sync_single(hwdev, dma_addr, size, dir); + else if (dir == DMA_FROM_DEVICE) + mark_clean(dma_addr, size); +} + +void +swiotlb_sync_single_range_for_cpu(struct device *hwdev, dma_addr_t dev_addr, + unsigned long offset, size_t size, int dir) +{ + swiotlb_sync_single_range(hwdev, dev_addr, offset, size, dir); +} + +void +swiotlb_sync_single_range_for_device(struct device *hwdev, dma_addr_t dev_addr, + unsigned long offset, size_t size, int dir) +{ + swiotlb_sync_single_range(hwdev, dev_addr, offset, size, dir); +} + +/* * Map a set of buffers described by scatterlist in streaming mode for DMA. * This is the scatter-gather version of the above swiotlb_map_single * interface. Here the scatter gather list elements are each tagged with the @@ -648,6 +679,8 @@ EXPORT_SYMBOL(swiotlb_map_sg); EXPORT_SYMBOL(swiotlb_unmap_sg); EXPORT_SYMBOL(swiotlb_sync_single_for_cpu); EXPORT_SYMBOL(swiotlb_sync_single_for_device); +EXPORT_SYMBOL_GPL(swiotlb_sync_single_range_for_cpu); +EXPORT_SYMBOL_GPL(swiotlb_sync_single_range_for_device); EXPORT_SYMBOL(swiotlb_sync_sg_for_cpu); EXPORT_SYMBOL(swiotlb_sync_sg_for_device); EXPORT_SYMBOL(swiotlb_dma_mapping_error); ^ permalink raw reply [flat|nested] 20+ messages in thread
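The heart of the sub-range variant in the patch above is simple offset arithmetic: resolve the base of the mapping, add the byte offset, then operate only on `[base+offset, base+offset+size)`. A simplified userspace model (illustrative names, no bounce-buffer bookkeeping):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified model of syncing a sub-range of a mapped buffer: only the
 * bytes in [offset, offset + size) are copied between the original
 * buffer and its shadow; everything outside the range is untouched. */

static void sync_range(char *dst, const char *src,
		       size_t offset, size_t size)
{
	for (size_t i = 0; i < size; i++)
		dst[offset + i] = src[offset + i];
}
```

This mirrors why the real helper can reuse `sync_single()` unchanged: after adding the offset, a sub-range sync is just a smaller whole-buffer sync.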
* [patch 2.6.13 4/6] swiotlb: support syncing DMA_BIDIRECTIONAL mappings 2005-09-12 14:48 ` [patch 2.6.13 3/6] swiotlb: support syncing sub-ranges of mappings John W. Linville @ 2005-09-12 14:48 ` John W. Linville 2005-09-12 14:48 ` [patch 2.6.13 5/6] swiotlb: file header comments John W. Linville 2005-09-12 18:51 ` [patch 2.6.13 4/6] swiotlb: support syncing DMA_BIDIRECTIONAL mappings Grant Grundler 0 siblings, 2 replies; 20+ messages in thread From: John W. Linville @ 2005-09-12 14:48 UTC (permalink / raw) To: linux-kernel, discuss, linux-ia64; +Cc: ak, tony.luck, Asit.K.Mallick The current implementation of sync_single in swiotlb.c chokes on DMA_BIDIRECTIONAL mappings. This patch adds the capability to sync those mappings, and optimizes other syncs by accounting for the sync target (i.e. cpu or device) in addition to the DMA direction of the mapping. Signed-off-by: John W. Linville <linville@tuxdriver.com> --- lib/swiotlb.c | 62 +++++++++++++++++++++++++++++++++++++--------------------- 1 files changed, 40 insertions(+), 22 deletions(-) diff --git a/lib/swiotlb.c b/lib/swiotlb.c --- a/lib/swiotlb.c +++ b/lib/swiotlb.c @@ -49,6 +49,14 @@ */ #define IO_TLB_SHIFT 11 +/* + * Enumeration for sync targets + */ +enum dma_sync_target { + SYNC_FOR_CPU = 0, + SYNC_FOR_DEVICE = 1, +}; + int swiotlb_force; /* @@ -295,21 +303,28 @@ unmap_single(struct device *hwdev, char } static void -sync_single(struct device *hwdev, char *dma_addr, size_t size, int dir) +sync_single(struct device *hwdev, char *dma_addr, size_t size, + int dir, int target) { int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT; char *buffer = io_tlb_orig_addr[index]; - /* - * bounce... copy the data back into/from the original buffer - * XXX How do you handle DMA_BIDIRECTIONAL here ? 
- */ - if (dir == DMA_FROM_DEVICE) - memcpy(buffer, dma_addr, size); - else if (dir == DMA_TO_DEVICE) - memcpy(dma_addr, buffer, size); - else + switch (target) { + case SYNC_FOR_CPU: + if (likely(dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL)) + memcpy(buffer, dma_addr, size); + else if (dir != DMA_TO_DEVICE && dir != DMA_NONE) + BUG(); + break; + case SYNC_FOR_DEVICE: + if (likely(dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)) + memcpy(dma_addr, buffer, size); + else if (dir != DMA_FROM_DEVICE && dir != DMA_NONE) + BUG(); + break; + default: BUG(); + } } void * @@ -494,14 +509,14 @@ swiotlb_unmap_single(struct device *hwde */ static inline void swiotlb_sync_single(struct device *hwdev, dma_addr_t dev_addr, - size_t size, int dir) + size_t size, int dir, int target) { char *dma_addr = phys_to_virt(dev_addr); if (dir == DMA_NONE) BUG(); if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end) - sync_single(hwdev, dma_addr, size, dir); + sync_single(hwdev, dma_addr, size, dir, target); else if (dir == DMA_FROM_DEVICE) mark_clean(dma_addr, size); } @@ -510,14 +525,14 @@ void swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr, size_t size, int dir) { - swiotlb_sync_single(hwdev, dev_addr, size, dir); + swiotlb_sync_single(hwdev, dev_addr, size, dir, SYNC_FOR_CPU); } void swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr, size_t size, int dir) { - swiotlb_sync_single(hwdev, dev_addr, size, dir); + swiotlb_sync_single(hwdev, dev_addr, size, dir, SYNC_FOR_DEVICE); } /* @@ -525,14 +540,15 @@ swiotlb_sync_single_for_device(struct de */ static inline void swiotlb_sync_single_range(struct device *hwdev, dma_addr_t dev_addr, - unsigned long offset, size_t size, int dir) + unsigned long offset, size_t size, + int dir, int target) { char *dma_addr = phys_to_virt(dev_addr) + offset; if (dir == DMA_NONE) BUG(); if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end) - sync_single(hwdev, dma_addr, size, dir); + sync_single(hwdev, 
dma_addr, size, dir, target); else if (dir == DMA_FROM_DEVICE) mark_clean(dma_addr, size); } @@ -541,14 +557,16 @@ void swiotlb_sync_single_range_for_cpu(struct device *hwdev, dma_addr_t dev_addr, unsigned long offset, size_t size, int dir) { - swiotlb_sync_single_range(hwdev, dev_addr, offset, size, dir); + swiotlb_sync_single_range(hwdev, dev_addr, offset, size, dir, + SYNC_FOR_CPU); } void swiotlb_sync_single_range_for_device(struct device *hwdev, dma_addr_t dev_addr, unsigned long offset, size_t size, int dir) { - swiotlb_sync_single_range(hwdev, dev_addr, offset, size, dir); + swiotlb_sync_single_range(hwdev, dev_addr, offset, size, dir, + SYNC_FOR_DEVICE); } /* @@ -627,7 +645,7 @@ swiotlb_unmap_sg(struct device *hwdev, s */ static inline void swiotlb_sync_sg(struct device *hwdev, struct scatterlist *sg, - int nelems, int dir) + int nelems, int dir, int target) { int i; @@ -637,21 +655,21 @@ swiotlb_sync_sg(struct device *hwdev, st for (i = 0; i < nelems; i++, sg++) if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg)) sync_single(hwdev, (void *) sg->dma_address, - sg->dma_length, dir); + sg->dma_length, dir, target); } void swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg, int nelems, int dir) { - swiotlb_sync_sg(hwdev, sg, nelems, dir); + swiotlb_sync_sg(hwdev, sg, nelems, dir, SYNC_FOR_CPU); } void swiotlb_sync_sg_for_device(struct device *hwdev, struct scatterlist *sg, int nelems, int dir) { - swiotlb_sync_sg(hwdev, sg, nelems, dir); + swiotlb_sync_sg(hwdev, sg, nelems, dir, SYNC_FOR_DEVICE); } int ^ permalink raw reply [flat|nested] 20+ messages in thread
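The direction/target decision table introduced by the patch above can be extracted into a small pure function for illustration. This is a simplified sketch: where the patch calls BUG() on an invalid combination, this helper just reports that no copy is needed, and the enum values mirror the patch rather than any header:

```c
#include <assert.h>

/* Returns +1 when data must be copied from the bounce buffer toward
 * the CPU's view, -1 when it must be copied toward the device's bounce
 * buffer, and 0 when no copy is needed. Illustrative only. */

enum dma_dir { DMA_BIDIRECTIONAL = 0, DMA_TO_DEVICE = 1,
	       DMA_FROM_DEVICE = 2, DMA_NONE = 3 };
enum sync_target { SYNC_FOR_CPU = 0, SYNC_FOR_DEVICE = 1 };

static int sync_copy_direction(enum sync_target target, enum dma_dir dir)
{
	if (target == SYNC_FOR_CPU)
		return (dir == DMA_FROM_DEVICE ||
			dir == DMA_BIDIRECTIONAL) ? 1 : 0;
	return (dir == DMA_TO_DEVICE ||
		dir == DMA_BIDIRECTIONAL) ? -1 : 0;
}
```

Factoring the decision this way makes the key property of the patch visible: DMA_BIDIRECTIONAL triggers a copy for *both* targets, which is exactly the case the old `sync_single()` could not handle.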
* [patch 2.6.13 5/6] swiotlb: file header comments 2005-09-12 14:48 ` [patch 2.6.13 4/6] swiotlb: support syncing DMA_BIDIRECTIONAL mappings John W. Linville @ 2005-09-12 14:48 ` John W. Linville 2005-09-12 14:48 ` [patch 2.6.13 6/6] x86_64: implement dma_sync_single_range_for_{cpu,device} John W. Linville 2005-09-12 18:51 ` [patch 2.6.13 4/6] swiotlb: support syncing DMA_BIDIRECTIONAL mappings Grant Grundler 1 sibling, 1 reply; 20+ messages in thread From: John W. Linville @ 2005-09-12 14:48 UTC (permalink / raw) To: linux-kernel, discuss, linux-ia64; +Cc: ak, tony.luck, Asit.K.Mallick Change comment at top of swiotlb.c to reflect that the code is shared with EM64T (i.e. Intel x86_64). Also add an entry for myself so that if I "broke it", everyone knows who "bought it"... :-) Signed-off-by: John W. Linville <linville@tuxdriver.com> --- lib/swiotlb.c | 6 ++++-- 1 files changed, 4 insertions(+), 2 deletions(-) diff --git a/lib/swiotlb.c b/lib/swiotlb.c --- a/lib/swiotlb.c +++ b/lib/swiotlb.c @@ -1,7 +1,7 @@ /* * Dynamic DMA mapping support. * - * This implementation is for IA-64 platforms that do not support + * This implementation is for IA-64 and EM64T platforms that do not support * I/O TLBs (aka DMA address translation hardware). * Copyright (C) 2000 Asit Mallick <Asit.K.Mallick@intel.com> * Copyright (C) 2000 Goutham Rao <goutham.rao@intel.com> @@ -11,7 +11,9 @@ * 03/05/07 davidm Switch from PCI-DMA to generic device DMA API. * 00/12/13 davidm Rename to swiotlb.c and add mark_clean() to avoid * unnecessary i-cache flushing. - * 04/07/.. ak Better overflow handling. Assorted fixes. + * 04/07/.. ak Better overflow handling. Assorted fixes. + * 05/09/10 linville Add support for syncing ranges, support syncing for + * DMA_BIDIRECTIONAL mappings, miscellaneous cleanup. */ #include <linux/cache.h> ^ permalink raw reply [flat|nested] 20+ messages in thread
* [patch 2.6.13 6/6] x86_64: implement dma_sync_single_range_for_{cpu,device} 2005-09-12 14:48 ` [patch 2.6.13 5/6] swiotlb: file header comments John W. Linville @ 2005-09-12 14:48 ` John W. Linville 2005-09-12 15:22 ` Andi Kleen 0 siblings, 1 reply; 20+ messages in thread From: John W. Linville @ 2005-09-12 14:48 UTC (permalink / raw) To: linux-kernel, discuss; +Cc: ak Implement dma_sync_single_range_for_{cpu,device} for x86_64. This makes use of swiotlb_sync_single_range_for_{cpu,device}. Signed-off-by: John W. Linville <linville@tuxdriver.com> --- include/asm-x86_64/dma-mapping.h | 28 ++++++++++++++++++++++++++++ 1 files changed, 28 insertions(+) diff --git a/include/asm-x86_64/dma-mapping.h b/include/asm-x86_64/dma-mapping.h --- a/include/asm-x86_64/dma-mapping.h +++ b/include/asm-x86_64/dma-mapping.h @@ -85,6 +85,34 @@ static inline void dma_sync_single_for_d flush_write_buffers(); } +static inline void dma_sync_single_range_for_cpu(struct device *hwdev, + dma_addr_t dma_handle, + unsigned long offset, + size_t size, int direction) +{ + if (direction == DMA_NONE) + out_of_line_bug(); + + if (swiotlb) + return swiotlb_sync_single_range_for_cpu(hwdev,dma_handle,offset,size,direction); + + flush_write_buffers(); +} + +static inline void dma_sync_single_range_for_device(struct device *hwdev, + dma_addr_t dma_handle, + unsigned long offset, + size_t size, int direction) +{ + if (direction == DMA_NONE) + out_of_line_bug(); + + if (swiotlb) + return swiotlb_sync_single_range_for_device(hwdev,dma_handle,offset,size,direction); + + flush_write_buffers(); +} + static inline void dma_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg, int nelems, int direction) ^ permalink raw reply [flat|nested] 20+ messages in thread
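The x86_64 wrappers above follow a run-time dispatch pattern: the generic `dma_sync_*` entry point checks whether swiotlb bounce buffering is active and, if so, delegates to the swiotlb implementation and returns early; otherwise only a write-buffer flush is needed. A standalone model of that control flow (all names are illustrative stand-ins, and counters replace the real side effects):

```c
#include <assert.h>
#include <stddef.h>

static int swiotlb_active;	/* stand-in for the kernel's `swiotlb` flag */
static int swiotlb_syncs;	/* counts delegations to the swiotlb path */
static int flushes;		/* counts plain write-buffer flushes */

static void swiotlb_sync_range(unsigned long offset, size_t size)
{
	(void)offset; (void)size;
	swiotlb_syncs++;	/* real code would sync the bounce buffer */
}

static void dma_sync_range(unsigned long offset, size_t size)
{
	if (swiotlb_active) {
		swiotlb_sync_range(offset, size);
		return;		/* mirrors the early return in the patch */
	}
	flushes++;		/* stand-in for flush_write_buffers() */
}
```

The early return matters: when the swiotlb path handles the sync, the non-swiotlb flush is skipped entirely, just as in the patch's `if (swiotlb) return swiotlb_...;` sequence.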
* Re: [patch 2.6.13 6/6] x86_64: implement dma_sync_single_range_for_{cpu,device}
  2005-09-12 14:48 ` [patch 2.6.13 6/6] x86_64: implement dma_sync_single_range_for_{cpu,device} John W. Linville
@ 2005-09-12 15:22   ` Andi Kleen
  0 siblings, 0 replies; 20+ messages in thread
From: Andi Kleen @ 2005-09-12 15:22 UTC (permalink / raw)
To: John W. Linville; +Cc: linux-kernel, discuss

On Monday 12 September 2005 16:48, John W. Linville wrote:
> Implement dma_sync_single_range_for_{cpu,device} for x86_64.  This
> makes use of swiotlb_sync_single_range_for_{cpu,device}.

I already have the simple patch that just used sync_single_range in my
tree and it's scheduled to go to Linus ASAP.  You can rebase on that
later.

-Andi
* Re: [patch 2.6.13 4/6] swiotlb: support syncing DMA_BIDIRECTIONAL mappings
  2005-09-12 14:48 ` [patch 2.6.13 4/6] swiotlb: support syncing DMA_BIDIRECTIONAL mappings John W. Linville
  2005-09-12 14:48   ` [patch 2.6.13 5/6] swiotlb: file header comments John W. Linville
@ 2005-09-12 18:51   ` Grant Grundler
  2005-09-12 19:51     ` John W. Linville
  1 sibling, 1 reply; 20+ messages in thread
From: Grant Grundler @ 2005-09-12 18:51 UTC (permalink / raw)
To: John W. Linville
Cc: linux-kernel, discuss, linux-ia64, ak, tony.luck, Asit.K.Mallick

On Mon, Sep 12, 2005 at 10:48:51AM -0400, John W. Linville wrote:
...
> +	switch (target) {
> +	case SYNC_FOR_CPU:
> +		if (likely(dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
> +			memcpy(buffer, dma_addr, size);
> +		else if (dir != DMA_TO_DEVICE && dir != DMA_NONE)
> +			BUG();
> +		break;
> +	case SYNC_FOR_DEVICE:
> +		if (likely(dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
> +			memcpy(dma_addr, buffer, size);
> +		else if (dir != DMA_FROM_DEVICE && dir != DMA_NONE)
> +			BUG();
> +		break;
> +	default:
> 		BUG();
> +	}

Isn't "DMA_NONE" expected to generate a warning or panic?
Documentation/DMA-mapping.txt says:

	The value PCI_DMA_NONE is to be used for debugging.  One can
	hold this in a data structure before you come to know the
	precise direction, and this will help catch cases where your
	direction tracking logic has failed to set things up properly.

And it just seems wrong to sync a buffer if no DMA has taken place.

...
> @@ -525,14 +540,15 @@ swiotlb_sync_single_for_device(struct de
>   */
>  static inline void
>  swiotlb_sync_single_range(struct device *hwdev, dma_addr_t dev_addr,
> -			  unsigned long offset, size_t size, int dir)
> +			  unsigned long offset, size_t size,
> +			  int dir, int target)
>  {
> 	char *dma_addr = phys_to_virt(dev_addr) + offset;
> 
> 	if (dir == DMA_NONE)
> 		BUG();

This existing code seems to support the idea that DMA sync interfaces
require the direction be set to something other than DMA_NONE.

thanks,
grant
* Re: [patch 2.6.13 4/6] swiotlb: support syncing DMA_BIDIRECTIONAL mappings
  2005-09-12 18:51 ` [patch 2.6.13 4/6] swiotlb: support syncing DMA_BIDIRECTIONAL mappings Grant Grundler
@ 2005-09-12 19:51   ` John W. Linville
  2005-09-12 19:53     ` [patch 2.6.13] swiotlb: BUG() for DMA_NONE in sync_single John W. Linville
  0 siblings, 1 reply; 20+ messages in thread
From: John W. Linville @ 2005-09-12 19:51 UTC (permalink / raw)
To: Grant Grundler
Cc: linux-kernel, discuss, linux-ia64, ak, tony.luck, Asit.K.Mallick

On Mon, Sep 12, 2005 at 11:51:20AM -0700, Grant Grundler wrote:
> On Mon, Sep 12, 2005 at 10:48:51AM -0400, John W. Linville wrote:
> > +	case SYNC_FOR_DEVICE:
> > +		if (likely(dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
> > +			memcpy(dma_addr, buffer, size);
> > +		else if (dir != DMA_FROM_DEVICE && dir != DMA_NONE)
> > +			BUG();
> > +		break;
> > +	default:

> Isn't "DMA_NONE" expected to generate a warning or panic?

True enough...I'll follow-up w/ an additive patch to account for that.
As you pointed-out, the higher-level functions in swiotlb filter that
out anyway, so this really isn't a big issue.

John
-- 
John W. Linville
linville@tuxdriver.com
* [patch 2.6.13] swiotlb: BUG() for DMA_NONE in sync_single
  2005-09-12 19:51 ` John W. Linville
@ 2005-09-12 19:53   ` John W. Linville
  2005-09-12 20:23     ` Grant Grundler
  0 siblings, 1 reply; 20+ messages in thread
From: John W. Linville @ 2005-09-12 19:53 UTC (permalink / raw)
To: Grant Grundler
Cc: linux-kernel, discuss, linux-ia64, ak, tony.luck, Asit.K.Mallick

Call BUG() if DMA_NONE is passed-in as direction for sync_single.

Signed-off-by: John W. Linville <linville@tuxdriver.com>
---

 lib/swiotlb.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/lib/swiotlb.c b/lib/swiotlb.c
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -315,13 +315,13 @@ sync_single(struct device *hwdev, char *
 	case SYNC_FOR_CPU:
 		if (likely(dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
 			memcpy(buffer, dma_addr, size);
-		else if (dir != DMA_TO_DEVICE && dir != DMA_NONE)
+		else if (dir != DMA_TO_DEVICE)
 			BUG();
 		break;
 	case SYNC_FOR_DEVICE:
 		if (likely(dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
 			memcpy(dma_addr, buffer, size);
-		else if (dir != DMA_FROM_DEVICE && dir != DMA_NONE)
+		else if (dir != DMA_FROM_DEVICE)
 			BUG();
 		break;
 	default:
-- 
John W. Linville
linville@tuxdriver.com
* Re: [patch 2.6.13] swiotlb: BUG() for DMA_NONE in sync_single
  2005-09-12 19:53 ` [patch 2.6.13] swiotlb: BUG() for DMA_NONE in sync_single John W. Linville
@ 2005-09-12 20:23   ` Grant Grundler
  2005-09-12 23:45     ` [patch 2.6.13 (take #2)] " John W. Linville
  0 siblings, 1 reply; 20+ messages in thread
From: Grant Grundler @ 2005-09-12 20:23 UTC (permalink / raw)
To: linux-kernel, discuss, linux-ia64, ak, tony.luck, Asit.K.Mallick

On Mon, Sep 12, 2005 at 03:53:56PM -0400, John W. Linville wrote:
> Call BUG() if DMA_NONE is passed-in as direction for sync_single.
> 
> Signed-off-by: John W. Linville <linville@tuxdriver.com>

Acked-by: Grant Grundler <iod00d@hp.com>

John,
Sorry - I didn't realize the tests for DMA_NONE I pointed out were
now redundant.  Can you respin this patch removing the redundant
checks for DMA_NONE as well?

thanks,
grant

> ---
> 
>  lib/swiotlb.c |    4 ++--
>  1 files changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/lib/swiotlb.c b/lib/swiotlb.c
> --- a/lib/swiotlb.c
> +++ b/lib/swiotlb.c
> @@ -315,13 +315,13 @@ sync_single(struct device *hwdev, char *
>  	case SYNC_FOR_CPU:
>  		if (likely(dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
>  			memcpy(buffer, dma_addr, size);
> -		else if (dir != DMA_TO_DEVICE && dir != DMA_NONE)
> +		else if (dir != DMA_TO_DEVICE)
>  			BUG();
>  		break;
>  	case SYNC_FOR_DEVICE:
>  		if (likely(dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
>  			memcpy(dma_addr, buffer, size);
> -		else if (dir != DMA_FROM_DEVICE && dir != DMA_NONE)
> +		else if (dir != DMA_FROM_DEVICE)
>  			BUG();
>  		break;
>  	default:
> -- 
> John W. Linville
> linville@tuxdriver.com
* [patch 2.6.13 (take #2)] swiotlb: BUG() for DMA_NONE in sync_single
  2005-09-12 20:23 ` Grant Grundler
@ 2005-09-12 23:45   ` John W. Linville
  2005-09-12 23:59     ` Grant Grundler
  2005-09-13  4:05     ` [discuss] " Andi Kleen
  0 siblings, 2 replies; 20+ messages in thread
From: John W. Linville @ 2005-09-12 23:45 UTC (permalink / raw)
To: Grant Grundler
Cc: linux-kernel, discuss, linux-ia64, ak, tony.luck, Asit.K.Mallick

Call BUG() if DMA_NONE is passed-in as direction for sync_single.
Also remove unnecessary checks for DMA_NONE in callers of sync_single.

Signed-off-by: John W. Linville <linville@tuxdriver.com>
---
This patch replaces the previous patch with (almost) the same subject.

 lib/swiotlb.c |   11 ++---------
 1 files changed, 2 insertions(+), 9 deletions(-)

diff --git a/lib/swiotlb.c b/lib/swiotlb.c
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -315,13 +315,13 @@ sync_single(struct device *hwdev, char *
 	case SYNC_FOR_CPU:
 		if (likely(dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
 			memcpy(buffer, dma_addr, size);
-		else if (dir != DMA_TO_DEVICE && dir != DMA_NONE)
+		else if (dir != DMA_TO_DEVICE)
 			BUG();
 		break;
 	case SYNC_FOR_DEVICE:
 		if (likely(dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
 			memcpy(dma_addr, buffer, size);
-		else if (dir != DMA_FROM_DEVICE && dir != DMA_NONE)
+		else if (dir != DMA_FROM_DEVICE)
 			BUG();
 		break;
 	default:
@@ -515,8 +515,6 @@ swiotlb_sync_single(struct device *hwdev
 {
 	char *dma_addr = phys_to_virt(dev_addr);

-	if (dir == DMA_NONE)
-		BUG();
 	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
 		sync_single(hwdev, dma_addr, size, dir, target);
 	else if (dir == DMA_FROM_DEVICE)
@@ -547,8 +545,6 @@ swiotlb_sync_single_range(struct device
 {
 	char *dma_addr = phys_to_virt(dev_addr) + offset;

-	if (dir == DMA_NONE)
-		BUG();
 	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
 		sync_single(hwdev, dma_addr, size, dir, target);
 	else if (dir == DMA_FROM_DEVICE)
@@ -651,9 +647,6 @@ swiotlb_sync_sg(struct device *hwdev, st
 {
 	int i;

-	if (dir == DMA_NONE)
-		BUG();
-
 	for (i = 0; i < nelems; i++, sg++)
 		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
 			sync_single(hwdev, (void *) sg->dma_address,
-- 
John W. Linville
linville@tuxdriver.com
* Re: [patch 2.6.13 (take #2)] swiotlb: BUG() for DMA_NONE in sync_single
  2005-09-12 23:45 ` [patch 2.6.13 (take #2)] " John W. Linville
@ 2005-09-12 23:59   ` Grant Grundler
  0 siblings, 0 replies; 20+ messages in thread
From: Grant Grundler @ 2005-09-12 23:59 UTC (permalink / raw)
To: Grant Grundler, linux-kernel, discuss, linux-ia64, ak, tony.luck,
    Asit.K.Mallick

On Mon, Sep 12, 2005 at 07:45:34PM -0400, John W. Linville wrote:
> Call BUG() if DMA_NONE is passed-in as direction for sync_single.
> Also remove unnecessary checks for DMA_NONE in callers of sync_single.

Looks good to me! :^)

> Signed-off-by: John W. Linville <linville@tuxdriver.com>

In case it matters:
ACKed-by: Grant Grundler <iod00d@hp.com>

thanks
grant

> ---
> This patch replaces the previous patch with (almost) the same subject.
> 
>  lib/swiotlb.c |   11 ++---------
>  1 files changed, 2 insertions(+), 9 deletions(-)
> 
> diff --git a/lib/swiotlb.c b/lib/swiotlb.c
> --- a/lib/swiotlb.c
> +++ b/lib/swiotlb.c
> @@ -315,13 +315,13 @@ sync_single(struct device *hwdev, char *
>  	case SYNC_FOR_CPU:
>  		if (likely(dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
>  			memcpy(buffer, dma_addr, size);
> -		else if (dir != DMA_TO_DEVICE && dir != DMA_NONE)
> +		else if (dir != DMA_TO_DEVICE)
>  			BUG();
>  		break;
>  	case SYNC_FOR_DEVICE:
>  		if (likely(dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
>  			memcpy(dma_addr, buffer, size);
> -		else if (dir != DMA_FROM_DEVICE && dir != DMA_NONE)
> +		else if (dir != DMA_FROM_DEVICE)
>  			BUG();
>  		break;
>  	default:
> @@ -515,8 +515,6 @@ swiotlb_sync_single(struct device *hwdev
>  {
>  	char *dma_addr = phys_to_virt(dev_addr);
> 
> -	if (dir == DMA_NONE)
> -		BUG();
>  	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
>  		sync_single(hwdev, dma_addr, size, dir, target);
>  	else if (dir == DMA_FROM_DEVICE)
> @@ -547,8 +545,6 @@ swiotlb_sync_single_range(struct device
>  {
>  	char *dma_addr = phys_to_virt(dev_addr) + offset;
> 
> -	if (dir == DMA_NONE)
> -		BUG();
>  	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
>  		sync_single(hwdev, dma_addr, size, dir, target);
>  	else if (dir == DMA_FROM_DEVICE)
> @@ -651,9 +647,6 @@ swiotlb_sync_sg(struct device *hwdev, st
>  {
>  	int i;
> 
> -	if (dir == DMA_NONE)
> -		BUG();
> -
>  	for (i = 0; i < nelems; i++, sg++)
>  		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
>  			sync_single(hwdev, (void *) sg->dma_address,
> -- 
> John W. Linville
> linville@tuxdriver.com
* Re: [discuss] [patch 2.6.13 (take #2)] swiotlb: BUG() for DMA_NONE in sync_single
  2005-09-12 23:45 ` [patch 2.6.13 (take #2)] " John W. Linville
  2005-09-12 23:59   ` Grant Grundler
@ 2005-09-13  4:05   ` Andi Kleen
  1 sibling, 0 replies; 20+ messages in thread
From: Andi Kleen @ 2005-09-13 4:05 UTC (permalink / raw)
To: discuss
Cc: John W. Linville, Grant Grundler, linux-kernel, linux-ia64,
    tony.luck, Asit.K.Mallick

On Tuesday 13 September 2005 01:45, John W. Linville wrote:
> Call BUG() if DMA_NONE is passed-in as direction for sync_single.
> Also remove unnecessary checks for DMA_NONE in callers of sync_single.
> 
> Signed-off-by: John W. Linville <linville@tuxdriver.com>

Hi - your changes look good, but you missed the 2.6.14 merge window
now so it'll be all 2.6.15 material.  If you think there are any
critical bug fixes in there (I didn't think there were any) please
extract them only.

-Andi
* [patch 2.6.13] x86_64: implement dma_sync_single_range_for_{cpu,device}
@ 2005-08-29 20:09 John W. Linville
2005-08-29 20:54 ` Andi Kleen
0 siblings, 1 reply; 20+ messages in thread
From: John W. Linville @ 2005-08-29 20:09 UTC (permalink / raw)
To: linux-kernel; +Cc: ak, discuss
Implement dma_sync_single_range_for_{cpu,device}, based on current
implementations of dma_sync_single_for_{cpu,device}.
Signed-off-by: John W. Linville <linville@tuxdriver.com>
---
It is hard to use this API if common platforms do not implement it. :-)
Hopefully I did not miss something obvious?
This is a naive implementation, so flame away...
include/asm-x86_64/dma-mapping.h | 28 ++++++++++++++++++++++++++++
1 files changed, 28 insertions(+)
diff --git a/include/asm-x86_64/dma-mapping.h b/include/asm-x86_64/dma-mapping.h
--- a/include/asm-x86_64/dma-mapping.h
+++ b/include/asm-x86_64/dma-mapping.h
@@ -85,6 +85,34 @@ static inline void dma_sync_single_for_d
flush_write_buffers();
}
+static inline void dma_sync_single_range_for_cpu(struct device *hwdev,
+ dma_addr_t dma_handle,
+ unsigned long offset,
+ size_t size, int direction)
+{
+ if (direction == DMA_NONE)
+ out_of_line_bug();
+
+ if (swiotlb)
+ return swiotlb_sync_single_for_cpu(hwdev,dma_handle+offset,size,direction);
+
+ flush_write_buffers();
+}
+
+static inline void dma_sync_single_range_for_device(struct device *hwdev,
+ dma_addr_t dma_handle,
+ unsigned long offset,
+ size_t size, int direction)
+{
+ if (direction == DMA_NONE)
+ out_of_line_bug();
+
+ if (swiotlb)
+ return swiotlb_sync_single_for_device(hwdev,dma_handle+offset,size,direction);
+
+ flush_write_buffers();
+}
+
static inline void dma_sync_sg_for_cpu(struct device *hwdev,
struct scatterlist *sg,
int nelems, int direction)
--
John W. Linville
linville@tuxdriver.com
* Re: [patch 2.6.13] x86_64: implement dma_sync_single_range_for_{cpu,device}
  2005-08-29 20:09 [patch 2.6.13] x86_64: implement dma_sync_single_range_for_{cpu,device} John W. Linville
@ 2005-08-29 20:54 ` Andi Kleen
  2005-08-29 21:48   ` John W. Linville
  0 siblings, 1 reply; 20+ messages in thread
From: Andi Kleen @ 2005-08-29 20:54 UTC (permalink / raw)
To: John W. Linville; +Cc: linux-kernel, discuss

On Monday 29 August 2005 22:09, John W. Linville wrote:
> Implement dma_sync_single_range_for_{cpu,device}, based on current
> implementations of dma_sync_single_for_{cpu,device}.

Hmm, who or what needs that?  It doesn't seem to be documented in
Documentation/DMA* and I also don't remember seeing any discussion
of it.

If it's commonly used it might be better to add new swiotlb_*
functions that only copy the requested range.

-Andi
* Re: [patch 2.6.13] x86_64: implement dma_sync_single_range_for_{cpu,device}
  2005-08-29 20:54 ` Andi Kleen
@ 2005-08-29 21:48   ` John W. Linville
  2005-08-30  1:14     ` [discuss] " Andi Kleen
  0 siblings, 1 reply; 20+ messages in thread
From: John W. Linville @ 2005-08-29 21:48 UTC (permalink / raw)
To: Andi Kleen; +Cc: linux-kernel, discuss

On Mon, Aug 29, 2005 at 10:54:53PM +0200, Andi Kleen wrote:
> On Monday 29 August 2005 22:09, John W. Linville wrote:
> > Implement dma_sync_single_range_for_{cpu,device}, based on current
> > implementations of dma_sync_single_for_{cpu,device}.
> 
> Hmm, who or what needs that?  It doesn't seem to be documented
> in Documentation/DMA* and I also don't remember seeing any
> discussion of it.

In Documentation/DMA-API.txt it is still referred to as
dma_sync_single_range.  I imagine the *_for_{cpu,device} stuff got
added at about the same time as it did for dma_sync_single,
dma_sync_sg, and the like.

These calls are implemented for basically all the other arches.
And, except for the noted *_for_{cpu,device} discrepancies, these
are documented in Documentation/DMA-API.txt.  It definitely seems
to be an unfortunate omission from include/asm-x86_64/dma-mapping.h.

As for who needs it, well, I suppose I do.  I want to use that API
in a patch I'm working-on.  No one will want to merge my patch if
it will not compile on x86_64... :-(

> If it's commonly used it might be better to add new swiotlb_*
> functions that only copy the requested range.

Perhaps...but I think that sounds more like a discussion of _how_
to implement the API, rather than _whether_ it should be implemented.

Using some new variant of the swiotlb_* API might be appropriate
for the x86_64 implementation.  But, since this is a portable API,
I don't think calling the (apparently Intel-specific) swiotlb_*
functions would be an appropriate replacement.

I'd be happy to do the implementation differently (or to have
someone else do so).  Do you have specific suggestions for how
to do so?

Thanks,

John
-- 
John W. Linville
linville@tuxdriver.com
* Re: [discuss] Re: [patch 2.6.13] x86_64: implement dma_sync_single_range_for_{cpu,device}
  2005-08-29 21:48 ` John W. Linville
@ 2005-08-30  1:14   ` Andi Kleen
  2005-08-30 17:54     ` John W. Linville
  0 siblings, 1 reply; 20+ messages in thread
From: Andi Kleen @ 2005-08-30 1:14 UTC (permalink / raw)
To: discuss; +Cc: John W. Linville, linux-kernel

On Monday 29 August 2005 23:48, John W. Linville wrote:
> Perhaps...but I think that sounds more like a discussion of _how_
> to implement the API, rather than _whether_ it should be implemented.
> Using some new variant of the swiotlb_* API might be appropriate
> for the x86_64 implementation.  But, since this is a portable API,
> I don't think calling the (apparently Intel-specific) swiotlb_*
> functions would be an appropriate replacement.

What I meant is that instead of the dumb implementation you did it
would be better to implement it in swiotlb_* too and copy only the
requested byte range there and then call these new functions from
the x86-64 wrapper.

-Andi
* Re: [discuss] Re: [patch 2.6.13] x86_64: implement dma_sync_single_range_for_{cpu,device}
  2005-08-30  1:14 ` [discuss] " Andi Kleen
@ 2005-08-30 17:54   ` John W. Linville
  2005-08-30 17:58     ` [patch 2.6.13] swiotlb: add swiotlb_sync_single_range_for_{cpu,device} John W. Linville
  0 siblings, 1 reply; 20+ messages in thread
From: John W. Linville @ 2005-08-30 17:54 UTC (permalink / raw)
To: Andi Kleen; +Cc: discuss, linux-kernel

On Tue, Aug 30, 2005 at 03:14:34AM +0200, Andi Kleen wrote:
> On Monday 29 August 2005 23:48, John W. Linville wrote:
> > I don't think calling the (apparently Intel-specific) swiotlb_*
> > functions would be an appropriate replacement.
> 
> What I meant is that instead of the dumb implementation you did
> it would be better to implement it in swiotlb_* too and copy
> only the requested byte range there and then call these new
> functions from the x86-64 wrapper.

Thanks.  That is more helpful than the previous message.

I was leery of disturbing the swiotlb_* API needlessly, especially
since that involves ia64 as well.  But if you think that would be
better, then I'll work in that direction.  Patches to follow...

John

P.S.  BTW, "dumb" is a term that is both subjective and pejorative.
IMHO, it is "dumb" to use the word "dumb" in public discourse...
-- 
John W. Linville
linville@tuxdriver.com
* [patch 2.6.13] swiotlb: add swiotlb_sync_single_range_for_{cpu,device}
  2005-08-30 17:54 ` John W. Linville
@ 2005-08-30 17:58   ` John W. Linville
  0 siblings, 0 replies; 20+ messages in thread
From: John W. Linville @ 2005-08-30 17:58 UTC (permalink / raw)
To: linux-kernel; +Cc: Andi Kleen, discuss, tony.luck, linux-ia64

Add swiotlb_sync_single_range_for_{cpu,device} implementations.
This is used to support implementation of
dma_sync_single_range_for_{cpu,device} on x86_64.

Signed-off-by: John W. Linville <linville@tuxdriver.com>
---

 arch/ia64/lib/swiotlb.c      |   33 +++++++++++++++++++++++++++++++++
 include/asm-x86_64/swiotlb.h |    8 ++++++++
 2 files changed, 41 insertions(+)

diff --git a/arch/ia64/lib/swiotlb.c b/arch/ia64/lib/swiotlb.c
--- a/arch/ia64/lib/swiotlb.c
+++ b/arch/ia64/lib/swiotlb.c
@@ -522,6 +522,37 @@ swiotlb_sync_single_for_device(struct de
 }

 /*
+ * Same as above, but for a sub-range of the mapping.
+ */
+void
+swiotlb_sync_single_range_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
+				  unsigned long offset, size_t size, int dir)
+{
+	char *dma_addr = phys_to_virt(dev_addr) + offset;
+
+	if (dir == DMA_NONE)
+		BUG();
+	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
+		sync_single(hwdev, dma_addr, size, dir);
+	else if (dir == DMA_FROM_DEVICE)
+		mark_clean(dma_addr, size);
+}
+
+void
+swiotlb_sync_single_range_for_device(struct device *hwdev, dma_addr_t dev_addr,
+				     unsigned long offset, size_t size, int dir)
+{
+	char *dma_addr = phys_to_virt(dev_addr) + offset;
+
+	if (dir == DMA_NONE)
+		BUG();
+	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
+		sync_single(hwdev, dma_addr, size, dir);
+	else if (dir == DMA_FROM_DEVICE)
+		mark_clean(dma_addr, size);
+}
+
+/*
  * Map a set of buffers described by scatterlist in streaming mode for DMA.
  * This is the scatter-gather version of the above swiotlb_map_single
  * interface.  Here the scatter gather list elements are each tagged with the
@@ -650,6 +681,8 @@
 EXPORT_SYMBOL(swiotlb_map_sg);
 EXPORT_SYMBOL(swiotlb_unmap_sg);
 EXPORT_SYMBOL(swiotlb_sync_single_for_cpu);
 EXPORT_SYMBOL(swiotlb_sync_single_for_device);
+EXPORT_SYMBOL_GPL(swiotlb_sync_single_range_for_cpu);
+EXPORT_SYMBOL_GPL(swiotlb_sync_single_range_for_device);
 EXPORT_SYMBOL(swiotlb_sync_sg_for_cpu);
 EXPORT_SYMBOL(swiotlb_sync_sg_for_device);
 EXPORT_SYMBOL(swiotlb_dma_mapping_error);
diff --git a/include/asm-x86_64/swiotlb.h b/include/asm-x86_64/swiotlb.h
--- a/include/asm-x86_64/swiotlb.h
+++ b/include/asm-x86_64/swiotlb.h
@@ -15,6 +15,14 @@ extern void swiotlb_sync_single_for_cpu(
 extern void swiotlb_sync_single_for_device(struct device *hwdev,
 					   dma_addr_t dev_addr,
 					   size_t size, int dir);
+extern void swiotlb_sync_single_range_for_cpu(struct device *hwdev,
+					      dma_addr_t dev_addr,
+					      unsigned long offset,
+					      size_t size, int dir);
+extern void swiotlb_sync_single_range_for_device(struct device *hwdev,
+						 dma_addr_t dev_addr,
+						 unsigned long offset,
+						 size_t size, int dir);
 extern void swiotlb_sync_sg_for_cpu(struct device *hwdev,
 				    struct scatterlist *sg, int nelems,
 				    int dir);
-- 
John W. Linville
linville@tuxdriver.com
end of thread, other threads:[~2005-09-13 4:05 UTC | newest]
Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2005-08-30 18:03 [patch 2.6.13] swiotlb: add swiotlb_sync_single_range_for_{cpu,device} Luck, Tony
2005-08-30 18:09 ` John W. Linville
2005-08-30 18:33 ` [rfc patch] swiotlb: consolidate swiotlb_sync_single_* implementations John W. Linville
2005-08-30 18:40 ` [rfc patch] swiotlb: consolidate swiotlb_sync_sg_* implementations John W. Linville
2005-09-12 14:48 ` [patch 2.6.13 0/6] swiotlb maintenance and x86_64 dma_sync_single_range_for_{cpu,device} John W. Linville
2005-09-12 14:48 ` [patch 2.6.13 1/6] swiotlb: move from arch/ia64/lib to lib John W. Linville
2005-09-12 14:48 ` [patch 2.6.13 2/6] swiotlb: cleanup some code duplication cruft John W. Linville
2005-09-12 14:48 ` [patch 2.6.13 3/6] swiotlb: support syncing sub-ranges of mappings John W. Linville
2005-09-12 14:48 ` [patch 2.6.13 4/6] swiotlb: support syncing DMA_BIDIRECTIONAL mappings John W. Linville
2005-09-12 14:48 ` [patch 2.6.13 5/6] swiotlb: file header comments John W. Linville
2005-09-12 14:48 ` [patch 2.6.13 6/6] x86_64: implement dma_sync_single_range_for_{cpu,device} John W. Linville
2005-09-12 15:22 ` Andi Kleen
2005-09-12 18:51 ` [patch 2.6.13 4/6] swiotlb: support syncing DMA_BIDIRECTIONAL mappings Grant Grundler
2005-09-12 19:51 ` John W. Linville
2005-09-12 19:53 ` [patch 2.6.13] swiotlb: BUG() for DMA_NONE in sync_single John W. Linville
2005-09-12 20:23 ` Grant Grundler
2005-09-12 23:45 ` [patch 2.6.13 (take #2)] " John W. Linville
2005-09-12 23:59 ` Grant Grundler
2005-09-13 4:05 ` [discuss] " Andi Kleen
-- strict thread matches above, loose matches on Subject: below --
2005-08-29 20:09 [patch 2.6.13] x86_64: implement dma_sync_single_range_for_{cpu,device} John W. Linville
2005-08-29 20:54 ` Andi Kleen
2005-08-29 21:48 ` John W. Linville
2005-08-30 1:14 ` [discuss] " Andi Kleen
2005-08-30 17:54 ` John W. Linville
2005-08-30 17:58 ` [patch 2.6.13] swiotlb: add swiotlb_sync_single_range_for_{cpu,device} John W. Linville
This is a public inbox, see mirroring instructions for how to clone and mirror all data and code used for this inbox