From: Jerome Glisse <j.glisse@gmail.com>
To: Jan Kara <jack@suse.cz>
Cc: linux-fsdevel@vger.kernel.org, linux-ext4@vger.kernel.org,
linux-mm@kvack.org, Ross Zwisler <ross.zwisler@linux.intel.com>,
Dan Williams <dan.j.williams@intel.com>,
linux-nvdimm@lists.01.org, Matthew Wilcox <willy@linux.intel.com>
Subject: Re: [PATCH 17/18] dax: Use radix tree entry lock to protect cow faults
Date: Tue, 19 Apr 2016 07:46:09 -0400 [thread overview]
Message-ID: <20160419114609.GA13932@gmail.com> (raw)
In-Reply-To: <1461015341-20153-18-git-send-email-jack@suse.cz>
On Mon, Apr 18, 2016 at 11:35:40PM +0200, Jan Kara wrote:
> When doing cow faults, we cannot directly fill in PTE as we do for other
> faults as we rely on generic code to do proper accounting of the cowed page.
> We also have no page to lock to protect against races with truncate as
> other faults have and we need the protection to extend until the moment
> generic code inserts cowed page into PTE thus at that point we have no
> protection of fs-specific i_mmap_sem. So far we relied on using
> i_mmap_lock for the protection however that is completely special to cow
> faults. To make fault locking more uniform use DAX entry lock instead.
>
> Reviewed-by: Ross Zwisler <ross.zwisler@linux.intel.com>
> Signed-off-by: Jan Kara <jack@suse.cz>
> ---
> fs/dax.c | 12 +++++-------
> include/linux/dax.h | 7 +++++++
> include/linux/mm.h | 7 +++++++
> mm/memory.c | 38 ++++++++++++++++++--------------------
> 4 files changed, 37 insertions(+), 27 deletions(-)
>
[...]
> diff --git a/mm/memory.c b/mm/memory.c
> index 93897f23cc11..f09cdb8d48fa 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -63,6 +63,7 @@
> #include <linux/dma-debug.h>
> #include <linux/debugfs.h>
> #include <linux/userfaultfd_k.h>
> +#include <linux/dax.h>
>
> #include <asm/io.h>
> #include <asm/mmu_context.h>
> @@ -2785,7 +2786,8 @@ oom:
> */
> static int __do_fault(struct vm_area_struct *vma, unsigned long address,
> pgoff_t pgoff, unsigned int flags,
> - struct page *cow_page, struct page **page)
> + struct page *cow_page, struct page **page,
> + void **entry)
> {
> struct vm_fault vmf;
> int ret;
> @@ -2800,8 +2802,10 @@ static int __do_fault(struct vm_area_struct *vma, unsigned long address,
> ret = vma->vm_ops->fault(vma, &vmf);
> if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY)))
> return ret;
> - if (!vmf.page)
> - goto out;
Removing the check above sounds seriously bogus to me: it means the
if (unlikely(PageHWPoison(vmf.page))) test below could dereference a NULL
pointer.
> + if (ret & VM_FAULT_DAX_LOCKED) {
> + *entry = vmf.entry;
> + return ret;
> + }
I see that below you call __do_fault() with NULL for entry. If I
understand correctly, VM_FAULT_DAX_LOCKED will never be set in those
cases, so this should be fine, but a BUG_ON() might be worth adding.
>
> if (unlikely(PageHWPoison(vmf.page))) {
> if (ret & VM_FAULT_LOCKED)
> @@ -2815,7 +2819,6 @@ static int __do_fault(struct vm_area_struct *vma, unsigned long address,
> else
> VM_BUG_ON_PAGE(!PageLocked(vmf.page), vmf.page);
>
> - out:
> *page = vmf.page;
> return ret;
> }
> @@ -2987,7 +2990,7 @@ static int do_read_fault(struct mm_struct *mm, struct vm_area_struct *vma,
> pte_unmap_unlock(pte, ptl);
> }
>
> - ret = __do_fault(vma, address, pgoff, flags, NULL, &fault_page);
> + ret = __do_fault(vma, address, pgoff, flags, NULL, &fault_page, NULL);
> if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY)))
> return ret;
>
> @@ -3010,6 +3013,7 @@ static int do_cow_fault(struct mm_struct *mm, struct vm_area_struct *vma,
> pgoff_t pgoff, unsigned int flags, pte_t orig_pte)
> {
> struct page *fault_page, *new_page;
> + void *fault_entry;
> struct mem_cgroup *memcg;
> spinlock_t *ptl;
> pte_t *pte;
> @@ -3027,26 +3031,24 @@ static int do_cow_fault(struct mm_struct *mm, struct vm_area_struct *vma,
> return VM_FAULT_OOM;
> }
>
> - ret = __do_fault(vma, address, pgoff, flags, new_page, &fault_page);
> + ret = __do_fault(vma, address, pgoff, flags, new_page, &fault_page,
> + &fault_entry);
> if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY)))
> goto uncharge_out;
>
> - if (fault_page)
> + if (!(ret & VM_FAULT_DAX_LOCKED))
> copy_user_highpage(new_page, fault_page, address, vma);
Again, removing the check for a non-NULL page looks bogus to me. I think
there are still cases where you will get !(ret & VM_FAULT_DAX_LOCKED)
with fault_page == NULL, for instance from a device file mapping. It
seems what you really want is to set fault_page = NULL when
VM_FAULT_DAX_LOCKED is set.
> __SetPageUptodate(new_page);
>
> pte = pte_offset_map_lock(mm, pmd, address, &ptl);
> if (unlikely(!pte_same(*pte, orig_pte))) {
> pte_unmap_unlock(pte, ptl);
> - if (fault_page) {
> + if (!(ret & VM_FAULT_DAX_LOCKED)) {
Same as above.
> unlock_page(fault_page);
> put_page(fault_page);
> } else {
> - /*
> - * The fault handler has no page to lock, so it holds
> - * i_mmap_lock for read to protect against truncate.
> - */
> - i_mmap_unlock_read(vma->vm_file->f_mapping);
> + dax_unlock_mapping_entry(vma->vm_file->f_mapping,
> + pgoff);
> }
> goto uncharge_out;
> }
> @@ -3054,15 +3056,11 @@ static int do_cow_fault(struct mm_struct *mm, struct vm_area_struct *vma,
> mem_cgroup_commit_charge(new_page, memcg, false, false);
> lru_cache_add_active_or_unevictable(new_page, vma);
> pte_unmap_unlock(pte, ptl);
> - if (fault_page) {
> + if (!(ret & VM_FAULT_DAX_LOCKED)) {
Again, fault_page might be NULL even though VM_FAULT_DAX_LOCKED is not set.
> unlock_page(fault_page);
> put_page(fault_page);
> } else {
> - /*
> - * The fault handler has no page to lock, so it holds
> - * i_mmap_lock for read to protect against truncate.
> - */
> - i_mmap_unlock_read(vma->vm_file->f_mapping);
> + dax_unlock_mapping_entry(vma->vm_file->f_mapping, pgoff);
> }
> return ret;
> uncharge_out:
--