Subject: [PATCH 2/2] swiotlb: allow architectures to override swiotlb pool allocation
From: Jeremy Fitzhardinge @ 2008-10-15 2:08 UTC
To: Ingo Molnar
Cc: Andrew Morton, Joerg Roedel, Jan Beulich, Tony Luck,
FUJITA Tomonori, Linux Kernel Mailing List
Architectures may need to allocate memory specially for use with
the swiotlb.  Create the weak functions swiotlb_alloc_boot() and
swiotlb_alloc(), defaulting to the current behaviour.
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: Joerg Roedel <joerg.roedel@amd.com>
Cc: Jan Beulich <jbeulich@novell.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
---
 include/linux/swiotlb.h |  3 +++
 lib/swiotlb.c           | 15 ++++++++++++---
 2 files changed, 15 insertions(+), 3 deletions(-)
===================================================================
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -2,6 +2,9 @@
 #define __LINUX_SWIOTLB_H
 
 #include <asm/swiotlb.h>
+
+extern void *swiotlb_alloc_boot(size_t bytes);
+extern void *swiotlb_alloc(unsigned order);
 
 /* SWIOTLB interface */
===================================================================
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -127,6 +127,16 @@
 __setup("swiotlb=", setup_io_tlb_npages);
 /* make io_tlb_overflow tunable too? */
 
+void * __weak swiotlb_alloc_boot(size_t size)
+{
+	return alloc_bootmem_low_pages(size);
+}
+
+void * __weak swiotlb_alloc(unsigned order)
+{
+	return (void *)__get_free_pages(GFP_DMA | __GFP_NOWARN, order);
+}
+
 /*
  * Statically reserve bounce buffer space and initialize bounce buffer data
  * structures for the software IO TLB used to implement the DMA API.
@@ -146,7 +156,7 @@
 	/*
 	 * Get IO TLB memory from the low pages
 	 */
-	io_tlb_start = alloc_bootmem_low_pages(bytes);
+	io_tlb_start = swiotlb_alloc_boot(bytes);
 	if (!io_tlb_start)
 		panic("Cannot allocate SWIOTLB buffer");
 	io_tlb_end = io_tlb_start + bytes;
@@ -203,8 +213,7 @@
 	bytes = io_tlb_nslabs << IO_TLB_SHIFT;
 
 	while ((SLABS_PER_PAGE << order) > IO_TLB_MIN_SLABS) {
-		io_tlb_start = (char *)__get_free_pages(GFP_DMA | __GFP_NOWARN,
-							order);
+		io_tlb_start = swiotlb_alloc(order);
 		if (io_tlb_start)
 			break;
 		order--;