From: Catalin Marinas <catalin.marinas@arm.com>
To: "David S. Miller" <davem@davemloft.net>
Cc: rmk+lkml@arm.linux.org.uk, linux-kernel@vger.kernel.org
Subject: Re: 2.6.13-rc3: cache flush missing from somewhere
Date: Mon, 01 Aug 2005 17:34:19 +0100
Message-ID: <tnxirypboqc.fsf@arm.com>
In-Reply-To: <20050801.083505.88343974.davem@davemloft.net> (David S. Miller's message of "Mon, 01 Aug 2005 08:35:05 -0700 (PDT)")

"David S. Miller" <davem@davemloft.net> wrote:
> The "lazy dcache flushing" he mentioned only flushes on the
> processor where the store occurred, not on any other cpus.
>
> He took the sparc64 code which, at the time of the flush_dcache_page()
> call, stores the current cpu number in the page->flags and sets a
> bit indicating a flush is needed.  When some condition occurs
> requiring the delayed flush to occur, we look at the cpu number
> in the page and ask that specific cpu to do the flush.

That's a point I missed. The D-cache flushing should take place on the
CPU that wrote the page, not on the one that took the page fault (with
the I-cache invalidation done on all the CPUs). I don't see why this
wouldn't work.
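
Something along these lines, loosely modelled on what I understand the
sparc64 code does. The bit layout and the flush_dcache_page_impl()
helper below are made up purely for illustration, and
smp_call_function_single() is used as a stand-in for whatever
cross-call primitive is actually available:

#include <linux/mm.h>
#include <linux/smp.h>

#define PG_dcache_dirty		PG_arch_1	/* deferred flush pending */
#define DCACHE_CPU_SHIFT	24	/* assumed-free page->flags bits */
#define DCACHE_CPU_MASK		0xffUL

/* Called from flush_dcache_page(): remember which CPU dirtied it. */
static void set_dcache_dirty(struct page *page)
{
	/* Non-atomic RMW of page->flags; a real implementation would
	 * need cmpxchg or similar against concurrent flag updates. */
	page->flags &= ~(DCACHE_CPU_MASK << DCACHE_CPU_SHIFT);
	page->flags |= (unsigned long)smp_processor_id()
			<< DCACHE_CPU_SHIFT;
	set_bit(PG_dcache_dirty, &page->flags);
}

static void do_dcache_flush(void *info)
{
	struct page *page = info;

	/* Hypothetical CPU-local D-cache flush for one page. */
	flush_dcache_page_impl(page);
}

/* When the deferred flush must finally happen, run it on the CPU
 * that did the stores rather than the one taking the fault. */
static void run_deferred_dcache_flush(struct page *page)
{
	int cpu = (page->flags >> DCACHE_CPU_SHIFT) & DCACHE_CPU_MASK;

	if (test_and_clear_bit(PG_dcache_dirty, &page->flags))
		smp_call_function_single(cpu, do_dcache_flush, page, 1);
}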

> I've seen implementations where the I-cache does not snoop local cpu
> stores, but I've never seen one where other cpus do not snoop such
> stores.

On this ARM SMP implementation, only the D-cache snoops the other
CPUs' stores; the I-cache does not.

> You _HAVE_ to implement handling of I-cache update on L2
> cache line changes to handle updates from devices doing DMA, so why
> in the world special case stores done by other cpus?
>
> It almost sounds impossible to implement this and have the I-cache
> be coherent wrt. DMA transactions.

Shouldn't flush_dcache_page() be called anyway when a page is modified
by the kernel (or by a device via DMA)? With ARM's Harvard cache
architecture, the I-cache should be invalidated even on a uniprocessor
system. For SMP it is just a matter of invalidating it on all the CPUs
(done by issuing an inter-processor interrupt).
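
Roughly like this, using the generic on_each_cpu() cross-call helper
(shown here with its modern three-argument form), with
__cpu_icache_inval_page() standing in for the actual per-CPU ARM cache
operation:

#include <linux/mm.h>
#include <linux/smp.h>

/* Runs on every CPU, locally and via IPI on the others, and
 * invalidates the I-cache lines covering one page. */
static void ipi_icache_inval_page(void *info)
{
	unsigned long kaddr = (unsigned long)info;

	__cpu_icache_inval_page(kaddr);	/* hypothetical primitive */
}

static void icache_inval_page_all_cpus(struct page *page)
{
	on_each_cpu(ipi_icache_inval_page, page_address(page), 1);
}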

> Do you have to flush the whole I-cache every time some device DMAs
> a page into memory, before you can execute instructions out of it?

IMHO, only the I-cache lines corresponding to the DMA'ed page need
invalidating, not the whole I-cache. But, as I said, this should be
handled by flush_dcache_page(), whether lazily or not.
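
I.e. something like the below in whatever path notices the DMA'ed
page, using the existing flush_icache_range() interface on the page's
kernel mapping rather than invalidating the whole I-cache (whether
this is the right hook is exactly the open question here):

/* Invalidate only the I-cache lines covering the DMA'ed page. */
static void fixup_icache_after_dma(struct page *page)
{
	unsigned long start = (unsigned long)page_address(page);

	flush_icache_range(start, start + PAGE_SIZE);
}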

Thanks,

-- 
Catalin

