linux-arm-kernel.lists.infradead.org archive mirror
From: linux@arm.linux.org.uk (Russell King - ARM Linux)
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH] OMAP: iommu flush page table entries from L1 and L2 cache
Date: Thu, 28 Apr 2011 14:40:55 +0100	[thread overview]
Message-ID: <20110428134055.GA19709@n2100.arm.linux.org.uk> (raw)
In-Reply-To: <BANLkTin8gugnVoai6VHC7qMgK4eKRXRtHg@mail.gmail.com>

On Fri, Apr 15, 2011 at 06:26:40AM -0500, Gupta, Ramesh wrote:
> Russell,
> 
> On Thu, Apr 14, 2011 at 5:30 PM, Russell King - ARM Linux
> <linux@arm.linux.org.uk> wrote:
> > On Thu, Apr 14, 2011 at 04:52:48PM -0500, Fernando Guzman Lugo wrote:
> >> From: Ramesh Gupta <grgupta@ti.com>
> >>
> >> This patch is to flush the iommu page table entries from L1 and L2
> >> caches using dma_map_single. This also simplifies the implementation
> >> by removing the functions flush_iopgd_range/flush_iopte_range.
> >
> > No. This usage is just wrong. If you're going to use the DMA API then
> > unmap it, otherwise the DMA API debugging will go awol.
> >
> 
> Thank you for the comments. This particular memory is only ever written
> by the A9 (for MMU programming) and only read by the slave processor,
> which is why the unmap was not called. I can rework the changes to call
> unmap properly, since this impacts the DMA API.
> Are there other ways to flush only this memory from the L1/L2 caches?
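
For reference, the usage asked for earlier in this thread pairs every
dma_map_single() with a dma_unmap_single() so that CONFIG_DMA_API_DEBUG
stays consistent.  A minimal sketch, where dev and iopgd are placeholder
names for the iommu's struct device and the freshly written table entries:

#include <linux/dma-mapping.h>

	/* Push the updated page table entries out of the CPU caches. */
	dma_addr_t dma = dma_map_single(dev, iopgd, size, DMA_TO_DEVICE);

	if (dma_mapping_error(dev, dma))
		return -ENOMEM;

	/* ... the slave processor's MMU can now walk the entries ... */

	/* Pairing the unmap keeps the DMA API and its debug checks balanced. */
	dma_unmap_single(dev, dma, size, DMA_TO_DEVICE);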

We _could_ invent a new API to deal with this, which is probably going
to be far better in the longer term for page table based iommus.  That's
going to need some thought - eg, do we need to pass a struct device
argument for the iommu cache flushing so we know whether we need to flush
or not (eg, if we have cache coherent iommus)...
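
Purely as an illustration of that idea (iommu_flush_pgtable and
dev_iommu_coherent are invented names, not existing kernel interfaces):

#include <linux/device.h>
#include <linux/types.h>

/*
 * Hypothetical helper: make freshly written page table entries visible
 * to a page-table-walking iommu.  The struct device argument lets a
 * cache coherent iommu skip the maintenance entirely.
 */
void iommu_flush_pgtable(struct device *dev, void *entries, size_t size)
{
	if (dev_iommu_coherent(dev))	/* invented predicate */
		return;

	/*
	 * Non-coherent case: clean the cache lines covering the entries
	 * from L1 and L2 (e.g. via the architecture's cache maintenance
	 * helpers) and drain the write buffer.  No DMA API mapping or
	 * unmapping is involved.
	 */
	arch_clean_pgtable_area(entries, size);	/* placeholder name */
}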


Thread overview: 24+ messages
2011-04-14 21:52 [PATCH] OMAP: iommu flush page table entries from L1 and L2 cache Fernando Guzman Lugo
2011-04-14 22:30 ` Russell King - ARM Linux
2011-04-15  2:24   ` KyongHo Cho
2011-04-15  8:12     ` Russell King - ARM Linux
2011-04-15 11:26   ` Gupta, Ramesh
2011-04-28 13:40     ` Russell King - ARM Linux [this message]
2011-04-28 16:48       ` Gupta, Ramesh
2011-08-11 19:28         ` Gupta, Ramesh
2011-08-11 22:29           ` Russell King - ARM Linux
2011-08-12 16:05             ` Gupta, Ramesh
2011-10-16 18:32               ` C.A, Subramaniam
2012-05-29 15:53               ` Gupta, Ramesh
2011-04-18  7:29   ` Arnd Bergmann
2011-04-18 11:05     ` Tony Lindgren
2011-04-18 11:42       ` Hiroshi DOYU
2011-04-18 13:25         ` Arnd Bergmann
2011-04-18 11:58       ` Arnd Bergmann
2011-04-18 12:55         ` Kyungmin Park
2011-04-18 14:13           ` Arnd Bergmann
2011-04-19  9:11             ` Kyungmin Park
2011-04-19 12:01               ` Arnd Bergmann
2011-04-19 12:35                 ` Kyungmin Park
2011-04-19 13:02                   ` Russell King - ARM Linux
2011-04-19 13:11                   ` Arnd Bergmann
