From: toshi.kani@hpe.com (Kani, Toshi)
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 2/2] x86/mm: implement free pmd/pte page interfaces
Date: Thu, 26 Apr 2018 22:30:14 +0000 [thread overview]
Message-ID: <1524781764.2693.503.camel@hpe.com> (raw)
In-Reply-To: <20180426200737.GS15462@8bytes.org>
On Thu, 2018-04-26 at 22:07 +0200, joro at 8bytes.org wrote:
> On Thu, Apr 26, 2018 at 05:49:58PM +0000, Kani, Toshi wrote:
> > On Thu, 2018-04-26 at 19:23 +0200, joro at 8bytes.org wrote:
> > > So the PMD entry you clear can still be in a page-walk cache and this
> > > needs to be flushed too before you can free the PTE page. Otherwise
> > > page-walks might still go to the page you just freed. That is especially
> > > bad when the page is already reallocated and filled with other data.
> >
> > I do not understand why we need to flush processor caches here. x86
> > processor caches are coherent with MESI. So, clearing a PMD entry
> > modifies a cache entry on the processor associated with the address,
> > which in turn invalidates all stale cache entries on other processors.
>
> A page walk cache is not about the processor's data cache; it's a cache
> similar to the TLB that speeds up page walks by caching intermediate
> results of previous page walks.
Thanks for the clarification. After reading through the SDM one more
time, I agree that we need a TLB purge here. Here is my current
understanding.
- INVLPG purges both the TLB and the paging-structure caches, so the
cached PMD entry was purged once.
- However, the processor may cache this PMD entry again later during
speculation since it has the P-bit set. (This is where my
misunderstanding was: speculation is not allowed to access the target
address, but it may still cache this PMD entry.)
- A single INVLPG on each processor purges this cached PMD entry; a
full range purge is not needed (that was already done).
Does that sound right to you?
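For concreteness, here is a minimal sketch of how I picture the x86
side with that purge added. The function name follows the
pmd_free_pte_page() interface from this series, but the signature
(taking an address) and the flush_tlb_kernel_range() call are my
assumptions for illustration, not the literal patch:

#include <linux/mm.h>
#include <asm/pgalloc.h>
#include <asm/tlbflush.h>

/* Illustrative sketch only -- signature and flush choice are assumptions. */
int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
{
	pte_t *pte;

	if (pmd_none(*pmd))
		return 1;

	/* Grab the PTE page backing this PMD before clearing the entry. */
	pte = (pte_t *)pmd_page_vaddr(*pmd);
	pmd_clear(pmd);

	/*
	 * Purge the TLB and paging-structure caches on all CPUs so that no
	 * processor can still reach the PTE page through a cached PMD
	 * entry. A single-page purge per CPU is enough; the mapped range
	 * itself was flushed when the old mapping was torn down.
	 */
	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);

	free_page((unsigned long)pte);
	return 1;
}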
As for the BUG_ON issue, are you able to reproduce it? If so, would
you be able to test the fix?
Regards,
-Toshi
Thread overview: 24+ messages
2018-03-14 18:01 [PATCH v2 0/2] fix memory leak / panic in ioremap huge pages Toshi Kani
2018-03-14 18:01 ` [PATCH v2 1/2] mm/vmalloc: Add interfaces to free unmapped page table Toshi Kani
2018-03-14 22:38 ` Andrew Morton
2018-03-15 14:27 ` Kani, Toshi
2018-03-14 18:01 ` [PATCH v2 2/2] x86/mm: implement free pmd/pte page interfaces Toshi Kani
2018-03-15 7:39 ` Chintan Pandya
2018-03-15 14:51 ` Kani, Toshi
2018-04-26 14:19 ` Joerg Roedel
2018-04-26 16:21 ` Kani, Toshi
2018-04-26 17:23 ` joro at 8bytes.org
2018-04-26 17:49 ` Kani, Toshi
2018-04-26 20:07 ` joro at 8bytes.org
2018-04-26 22:30 ` Kani, Toshi [this message]
2018-04-27 7:37 ` joro at 8bytes.org
2018-04-27 11:39 ` Michal Hocko
2018-04-27 11:46 ` joro at 8bytes.org
2018-04-27 11:52 ` Chintan Pandya
2018-04-27 12:48 ` joro at 8bytes.org
2018-04-27 13:42 ` Chintan Pandya
2018-04-27 14:31 ` Kani, Toshi
2018-04-28 9:02 ` joro at 8bytes.org
2018-04-28 20:54 ` Kani, Toshi
2018-04-30 7:30 ` Chintan Pandya
2018-04-30 13:43 ` Kani, Toshi