From: Zach Pfeffer <zpfeffer@codeaurora.org>
To: Andi Kleen <andi@firstfloor.org>
Cc: Daniel Walker <dwalker@codeaurora.org>,
Randy Dunlap <randy.dunlap@oracle.com>,
mel@csn.ul.ie, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
linux-arm-msm@vger.kernel.org, linux-omap@vger.kernel.org,
linux-arm-kernel@lists.infradead.org
Subject: Re: [RFC 3/3] mm: iommu: The Virtual Contiguous Memory Manager
Date: Fri, 02 Jul 2010 11:42:57 -0700
Message-ID: <4C2E3331.3090405@codeaurora.org>
In-Reply-To: <20100702082225.GA12221@basil.fritz.box>

Andi Kleen wrote:
> On Thu, Jul 01, 2010 at 11:17:34PM -0700, Zach Pfeffer wrote:
>> Andi Kleen wrote:
>>>> The VCMM provides a more abstract, global view with finer-grained
>>>> control of each mapping a user wants to create. For instance, the
>>>> semantics of iommu_map preclude its use in setting up just the IOMMU
>>>> side of a mapping. With a one-sided map, two IOMMU devices can be
>>> Hmm? dma_map_* does not change any CPU mappings. It only sets up
>>> DMA mapping(s).
>> Sure, but I was saying that iommu_map() doesn't just set up the IOMMU
>> mappings; it sets up both the IOMMU and kernel buffer mappings.
>
> Normally the data is already in the kernel or user mappings, so why
> would you need another CPU mapping too? Sometimes the CPU
> code has to scatter-gather, but that is considered acceptable
> (and if it really cannot be rewritten to support sg it's better
> to have an explicit vmap operation)
>
> In general on larger systems with many CPUs changing CPU mappings
> also gets expensive (because you have to communicate with all cores),
> and is not a good idea on frequent IO paths.
That's all true, but what a VCMM allows is for these trade-offs to be
made by the user on future systems. Changing the IO path around may
not be too expensive on future chips, or the user may be okay with
the performance penalty. A VCMM doesn't enforce a policy on the user;
it lets the user set their own policy.
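
To make that concrete, here is a rough sketch of a one-sided,
IOMMU-only mapping using the vcm_* calls proposed in this series. The
signatures and the VCM_MEMTYPE_0 constant are approximations for
illustration, not quoted from the patches, and error handling is
omitted:

/*
 * Illustrative sketch only: the vcm_* names and signatures are
 * approximated from this RFC series, not taken verbatim.
 * Note that no CPU-side mapping is created anywhere below.
 */
static int map_iommu_side_only(struct device *dev)
{
	struct vcm *vcm;
	struct res *res;
	struct physmem *pm;
	struct avcm *avcm;

	/* A device-visible virtual space, independent of the CPU's. */
	vcm = vcm_create(SZ_4K, SZ_16M);

	/* Carve out a virtually contiguous region for the device. */
	res = vcm_reserve(vcm, SZ_1M, 0);

	/*
	 * Back it with physical memory. The kernel mapping is left
	 * untouched, so the cross-core cost described above is only
	 * paid if the user later asks for a CPU-side mapping as well.
	 */
	pm = vcm_phys_alloc(VCM_MEMTYPE_0, SZ_1M, 0);
	vcm_back(res, pm);

	/* Attach the space to this device's IOMMU and program it. */
	avcm = vcm_assoc(vcm, dev, 0);
	return vcm_activate(avcm);
}

The policy choice lives entirely with the caller: whether and when the
CPU side gets mapped is a separate, explicit step.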
>>>> Additionally, the current IOMMU interface does not allow users to
>>>> associate one page table with multiple IOMMUs unless the user explicitly
>>> That assumes that all the IOMMUs on the system support the same page table
>>> format, right?
>> Actually no. Since the VCMM abstracts a page table as a Virtual
>> Contiguous Region (VCM), a VCM can be associated with any device,
>> regardless of its individual page-table format.
>
> But then there is no real page table sharing, is there?
> The real information should be in the page tables, nowhere else.
Yeah, and the implementation ensures that. The VCMM just adds a few
fields like start_addr, len and the device. The device still manages
its own page tables.
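
As a sketch of that bookkeeping (the field names here are mine, for
illustration only), the VCMM's own state can stay this thin, with the
device-specific page-table format living entirely behind the
association:

/* Hypothetical layout, mirroring the fields mentioned above. */
struct vcm {
	unsigned long start_addr;	/* base of the device-virtual space */
	size_t len;			/* size of the space */
	struct list_head res_list;	/* reservations within the space */
};

struct avcm {				/* one association per device */
	struct vcm *vcm;		/* the shared virtual space */
	struct device *dev;		/* this device's driver translates
					 * the VCM's reservations into
					 * page tables in its own format */
};

Two devices with different page-table formats can each hold an avcm
pointing at the same struct vcm; nothing format-specific ever needs
to live in the VCMM itself.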
>>> The standard Linux approach to such a problem is to write
>>> a library that drivers can use for common functionality, not put a middle
>>> layer in between. Libraries are much more flexible than layers.
>> That's true up to the "is this middle layer so useful that it's worth
>> it" point. The VM is a middle layer; you could make the same argument
>> about it: "the mapping code isn't too hard, just map in the memory
>> that you need and be done with it". But the VM middle layer provides a
>> clean separation between page frames and pages, which turns out to be
>
> Actually we use both PFNs and struct page *s in many layers up
> and down; there's not really any layering in that.
Sure, but the PFNs and the struct page *s are the middle layer. It's
just that things haven't been layered on top of them. A VCMM is the
higher-level abstraction, since it allows the size of the page frames
to vary and the consumers of the VCMs to be determined at run time.
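
For instance (purely illustrative; the mixed-size behavior is an
assumption about the physical allocator in RFC 2/3 of this series),
backing one reservation with variable-sized page frames might look
like:

/*
 * Sketch: the physical allocator is assumed to satisfy one request
 * with a mix of block sizes (e.g. 1 MB + 64 KB + 4 KB on ARM), so the
 * page-frame size is a property of the backing, not of the VCM, and
 * any device associated at run time can consume the same region.
 */
struct physmem *pm = vcm_phys_alloc(VCM_MEMTYPE_0,
				    SZ_1M + SZ_64K + SZ_4K, 0);
vcm_back(res, pm);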
--
Sent by an employee of the Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum.