From: david.vrabel@citrix.com (David Vrabel)
To: linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [RFC] add a struct page* parameter to dma_map_ops.unmap_page
Date: Mon, 17 Nov 2014 14:43:46 +0000
Message-ID: <546A09A2.9090704@citrix.com>
In-Reply-To: <alpine.DEB.2.02.1411111644490.26318@kaball.uk.xensource.com>
On 17/11/14 14:11, Stefano Stabellini wrote:
> Hi all,
> I am writing this email to ask for your advice.
>
> On architectures where DMA addresses are different from physical
> addresses, it can be difficult to retrieve the physical address of a
> page from its DMA address.
>
> Specifically, this is the case for Xen on arm and arm64, but I think
> that other architectures might have the same issue.
>
> Knowing the physical address is necessary to be able to issue any
> required cache maintenance operations when unmap_page,
> sync_single_for_cpu and sync_single_for_device are called.
>
> Adding a struct page* parameter to unmap_page, sync_single_for_cpu and
> sync_single_for_device would make Linux DMA handling on Xen on arm and
> arm64 much easier and quicker.
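
For concreteness, the proposal amounts to something like the following
change to struct dma_map_ops (a sketch against the current signatures;
the placement of the new parameter is illustrative only):

  struct dma_map_ops {
          /* ... */
          /* a struct page * parameter added to each of these: */
          void (*unmap_page)(struct device *dev, struct page *page,
                             dma_addr_t dma_handle, size_t size,
                             enum dma_data_direction dir,
                             struct dma_attrs *attrs);
          void (*sync_single_for_cpu)(struct device *dev,
                                      struct page *page,
                                      dma_addr_t dma_handle, size_t size,
                                      enum dma_data_direction dir);
          void (*sync_single_for_device)(struct device *dev,
                                         struct page *page,
                                         dma_addr_t dma_handle,
                                         size_t size,
                                         enum dma_data_direction dir);
          /* ... */
  };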
Using an opaque handle instead of a struct page * would be more
beneficial for the Intel IOMMU driver.  For example:
  typedef dma_addr_t dma_handle_t;

  dma_handle_t dma_map_single(struct device *dev,
                              void *va, size_t size,
                              enum dma_data_direction dir);

  void dma_unmap_single(struct device *dev,
                        dma_handle_t handle, size_t size,
                        enum dma_data_direction dir);
etc.
Drivers would then use:

  dma_addr_t dma_addr(dma_handle_t handle);

to obtain the bus address from the handle.
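
A converted driver would then look something like this (a sketch only;
DESC_ADDR_REG and the regs pointer are made-up device details):

  /* Map the buffer; the driver keeps only the opaque handle. */
  dma_handle_t handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

  /* Program the device with the bus address behind the handle. */
  writel(lower_32_bits(dma_addr(handle)), regs + DESC_ADDR_REG);

  /* ... after the DMA has completed ... */
  dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);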
> I think that other drivers have similar problems, such as the Intel
> IOMMU driver having to call find_iova and walk down an rbtree to get
> the physical address in its implementation of unmap_page.
>
> Callers have the struct page* in their hands already from the previous
> map_page call, so it shouldn't be an issue for them. A problem does
> exist, however: there are about 280 callers of dma_unmap_page and
> pci_unmap_page, and even more callers of the dma_sync_single_for_*
> functions.
You will also need to fix dma_unmap_single() and pci_unmap_single()
(another 1000+ callers).
You may need to consider introducing a parallel set of map/unmap API
calls that return/accept a handle, and then converting drivers
one-by-one as required, instead of trying to convert every driver at
once.
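
One possible shape for such a parallel set (the *_handle names are
purely hypothetical):

  /* Handle-based variants living alongside the existing
   * dma_map_single()/dma_unmap_single() until all drivers
   * have been converted. */
  dma_handle_t dma_map_single_handle(struct device *dev,
                                     void *va, size_t size,
                                     enum dma_data_direction dir);
  void dma_unmap_single_handle(struct device *dev,
                               dma_handle_t handle, size_t size,
                               enum dma_data_direction dir);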
David
Thread overview:
2014-11-17 14:11 [RFC] add a struct page* parameter to dma_map_ops.unmap_page Stefano Stabellini
2014-11-17 14:43 ` David Vrabel [this message]
2014-11-21 11:48 ` Stefano Stabellini
2014-11-21 20:18 ` Mitchel Humpherys