From: Ross Zwisler <ross.zwisler@linux.intel.com>
To: linux-kernel@vger.kernel.org
Cc: Ross Zwisler <ross.zwisler@linux.intel.com>,
"H. Peter Anvin" <hpa@zytor.com>,
"J. Bruce Fields" <bfields@fieldses.org>,
"Theodore Ts'o" <tytso@mit.edu>,
Alexander Viro <viro@zeniv.linux.org.uk>,
Andreas Dilger <adilger.kernel@dilger.ca>,
Dan Williams <dan.j.williams@intel.com>,
Dave Chinner <david@fromorbit.com>,
Ingo Molnar <mingo@redhat.com>, Jan Kara <jack@suse.com>,
Jeff Layton <jlayton@poochiereds.net>,
Matthew Wilcox <willy@linux.intel.com>,
Thomas Gleixner <tglx@linutronix.de>,
linux-ext4@vger.kernel.org, linux-fsdevel@vger.kernel.org,
linux-mm@kvack.org, linux-nvdimm@lists.01.org, x86@kernel.org,
xfs@oss.sgi.com, Andrew Morton <akpm@linux-foundation.org>,
Matthew Wilcox <matthew.r.wilcox@intel.com>,
Dave Hansen <dave.hansen@linux.intel.com>
Subject: [PATCH v2 06/11] mm: add pgoff_mkclean()
Date: Fri, 13 Nov 2015 17:06:45 -0700
Message-ID: <1447459610-14259-7-git-send-email-ross.zwisler@linux.intel.com>
In-Reply-To: <1447459610-14259-1-git-send-email-ross.zwisler@linux.intel.com>
Introduce pgoff_mkclean(), which is conceptually similar to page_mkclean()
except that it works in the absence of a struct page and can also be used
to clean PMDs. This is needed for DAX's dirty page handling.

pgoff_mkclean() doesn't return an error for a missing PTE/PMD when looping
through the VMAs because there is no requirement that every one of the
potentially many VMAs associated with a given struct address_space have a
mapping set up for our pgoff.
Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
---
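For reviewers, a minimal sketch of how a caller might use this (the
helper name dax_clean_mapping_entry() and the surrounding flow are
hypothetical, not part of this patch): once a DAX fsync/msync path has
flushed the data backing a dirty file offset to media, it could call
pgoff_mkclean() to clean and write protect every user mapping of that
offset.

#include <linux/fs.h>
#include <linux/rmap.h>

/*
 * Hypothetical helper, for illustration only: after the data backing
 * 'pgoff' in 'mapping' has been flushed to media, clean and write
 * protect every user mapping of that offset.
 */
static void dax_clean_mapping_entry(struct address_space *mapping,
				    pgoff_t pgoff)
{
	/* walks mapping->i_mmap; cleans any PTE or PMD found for pgoff */
	pgoff_mkclean(pgoff, mapping);
}

Because every mapping of the offset ends up write protected, the next
store takes a fault, which gives the DAX code a chance to mark the
offset dirty again.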
 include/linux/rmap.h |  5 +++++
 mm/rmap.c            | 51 +++++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 56 insertions(+)
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 29446ae..171a4ac 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -223,6 +223,11 @@ unsigned long page_address_in_vma(struct page *, struct vm_area_struct *);
int page_mkclean(struct page *);
/*
+ * Cleans and write protects the PTEs and PMDs of shared mappings.
+ */
+void pgoff_mkclean(pgoff_t, struct address_space *);
+
+/*
* called in munlock()/munmap() path to check for other vmas holding
* the page mlocked.
*/
diff --git a/mm/rmap.c b/mm/rmap.c
index f5b5c1f..8114862 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -586,6 +586,16 @@ vma_address(struct page *page, struct vm_area_struct *vma)
return address;
}
+static inline unsigned long
+pgoff_address(pgoff_t pgoff, struct vm_area_struct *vma)
+{
+	unsigned long address;
+
+	address = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
+	VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
+	return address;
+}
+
#ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
static void percpu_flush_tlb_batch_pages(void *data)
{
@@ -1040,6 +1050,47 @@ int page_mkclean(struct page *page)
}
EXPORT_SYMBOL_GPL(page_mkclean);
+void pgoff_mkclean(pgoff_t pgoff, struct address_space *mapping)
+{
+	struct vm_area_struct *vma;
+	int ret = 0;
+
+	i_mmap_lock_read(mapping);
+	vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff) {
+		struct mm_struct *mm = vma->vm_mm;
+		pmd_t pmd, *pmdp = NULL;
+		pte_t pte, *ptep = NULL;
+		unsigned long address;
+		spinlock_t *ptl;
+
+		address = pgoff_address(pgoff, vma);
+
+		/* on success, follow_pte_pmd() returns with ptl locked */
+		ret = follow_pte_pmd(mm, address, &ptep, &pmdp, &ptl);
+		if (ret)
+			continue;
+
+		if (pmdp) {
+			flush_cache_page(vma, address, pmd_pfn(*pmdp));
+			pmd = pmdp_huge_clear_flush(vma, address, pmdp);
+			pmd = pmd_wrprotect(pmd);
+			pmd = pmd_mkclean(pmd);
+			set_pmd_at(mm, address, pmdp, pmd);
+			spin_unlock(ptl);
+		} else {
+			BUG_ON(!ptep);
+			flush_cache_page(vma, address, pte_pfn(*ptep));
+			pte = ptep_clear_flush(vma, address, ptep);
+			pte = pte_wrprotect(pte);
+			pte = pte_mkclean(pte);
+			set_pte_at(mm, address, ptep, pte);
+			pte_unmap_unlock(ptep, ptl);
+		}
+	}
+	i_mmap_unlock_read(mapping);
+}
+EXPORT_SYMBOL_GPL(pgoff_mkclean);
+
/**
* page_move_anon_rmap - move a page to our anon_vma
* @page: the page to move to our anon_vma
--
2.1.0