linux-mm.kvack.org archive mirror
From: Dan Williams <dan.j.williams@intel.com>
To: Bob Liu <lliubbo@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Dave Hansen <dave@sr71.net>,
	"linux-nvdimm@lists.01.org" <linux-nvdimm@lists.01.org>,
	Linux-MM <linux-mm@kvack.org>,
	Alexander Viro <viro@zeniv.linux.org.uk>,
	Matthew Wilcox <willy@linux.intel.com>,
	Ross Zwisler <ross.zwisler@linux.intel.com>,
	Logan Gunthorpe <logang@deltatee.com>
Subject: Re: [-mm PATCH v4 14/18] mm, dax, pmem: introduce {get|put}_dev_pagemap() for dax-gup
Date: Sun, 27 Dec 2015 11:02:44 -0800
Message-ID: <CAPcyv4jvE3bB=oqjFRquHWV8f_o2XOX1oB9h_xMzA82sYVMOVQ@mail.gmail.com>
In-Reply-To: <CAA_GA1cYJhpqJkceYzJBpUj9Uvr68zGZzmphopu+7U+dEqCN3w@mail.gmail.com>

On Sun, Dec 27, 2015 at 12:46 AM, Bob Liu <lliubbo@gmail.com> wrote:
> On Mon, Dec 21, 2015 at 1:45 PM, Dan Williams <dan.j.williams@intel.com> wrote:
>> get_dev_pagemap() enables paths like get_user_pages() to pin a dynamically
>> mapped pfn-range (devm_memremap_pages()) while the resulting struct page
>> objects are in use.  Unlike get_page() it may fail if the device is, or
>> is in the process of being, disabled.  While the initial lookup of the
>> range may be an expensive list walk, the result is cached to speed up
>> subsequent lookups which are likely to be in the same mapped range.
>>
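(For context, the caller pattern this is aimed at looks roughly like the
sketch below. It is illustrative only, not code from the series, and the
error unwind of already-pinned pages is omitted. Each successful
get_dev_pagemap() call takes a reference, so the walk keeps exactly one
outstanding reference as its "cache" and drops it when the walk ends.)

static int pin_devmap_range(unsigned long pfn, unsigned long nr_pfns)
{
	struct dev_pagemap *pgmap = NULL, *prev;
	unsigned long i;

	for (i = 0; i < nr_pfns; i++) {
		prev = pgmap;
		/* reuse the previous result as a hint for the fast path */
		pgmap = get_dev_pagemap(pfn + i, prev);
		if (prev)
			put_dev_pagemap(prev);	/* drop last iteration's ref */
		if (!pgmap)
			return -EFAULT;	/* device disabled or being removed */
		get_page(pfn_to_page(pfn + i));
	}
	if (pgmap)
		put_dev_pagemap(pgmap);	/* drop the final cached reference */
	return 0;
}
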
>> devm_memremap_pages() now requires a reference counter to be specified
>> at init time.  For pmem this means moving request_queue allocation into
>> pmem_alloc() so the existing queue usage counter can track "device
>> pages".
>>
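(Concretely, the interface change being described is roughly the one
sketched below. The signature and the pmem call site are reconstructed
from memory of the series, so treat the exact parameter names as
approximate:)

/*
 * devm_memremap_pages() now takes the percpu_ref that
 * get/put_dev_pagemap() will pin while device pages are in use.
 */
void *devm_memremap_pages(struct device *dev, struct resource *res,
		struct percpu_ref *ref, struct vmem_altmap *altmap);

/*
 * pmem probe: the request_queue already carries a usage counter, so the
 * queue is allocated first and its counter is handed to the memmap.
 */
pmem->virt_addr = devm_memremap_pages(dev, &nsio->res,
		&q->q_usage_counter, altmap);
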
>> ZONE_DEVICE pages always have an elevated count and will never be on an
>> lru reclaim list.  That space in 'struct page' can be redirected for
>> other uses, but for safety introduce a poison value that will always
>> trip __list_add() to assert.  This allows half of the struct list_head
>> storage to be reclaimed with some assurance to back up the assumption
>> that the page count never goes to zero and a list_add() is never
>> attempted.
>>
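(The 'struct page' side of that description looks roughly like the sketch
below; the layout is reconstructed from memory of the series rather than
quoted from it:)

struct page {
	/* ... */
	union {
		struct list_head lru;		/* pageout list, slab, etc. */
		struct dev_pagemap *pgmap;	/* ZONE_DEVICE only: back pointer
						 * to the hosting device page
						 * map, never on an lru */
		/* ... other existing union members ... */
	};
	/* ... */
};

/* per page, at devm_memremap_pages() init time: */
list_del(&page->lru);	/* leaves LIST_POISON values in both words */
page->pgmap = pgmap;	/* reclaim the first word; the second stays
			 * poisoned, so an accidental list_add() asserts */
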
>> Cc: Dave Hansen <dave@sr71.net>
>> Cc: Andrew Morton <akpm@linux-foundation.org>
>> Cc: Matthew Wilcox <willy@linux.intel.com>
>> Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
>> Cc: Alexander Viro <viro@zeniv.linux.org.uk>
>> Tested-by: Logan Gunthorpe <logang@deltatee.com>
>> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
>> ---
[..]
>> +static inline struct dev_pagemap *get_dev_pagemap(unsigned long pfn,
>> +               struct dev_pagemap *pgmap)
>> +{
>> +       const struct resource *res = pgmap ? pgmap->res : NULL;
>> +       resource_size_t phys = PFN_PHYS(pfn);
>> +
>> +       /*
>> +        * In the cached case we're already holding a live reference so
>> +        * we can simply do a blind increment
>> +        */
>> +       if (res && phys >= res->start && phys <= res->end) {
>> +               percpu_ref_get(pgmap->ref);
>> +               return pgmap;
>> +       }
>> +
>> +       /* fall back to slow path lookup */
>> +       rcu_read_lock();
>> +       pgmap = find_dev_pagemap(phys);
>
> Is it possible to just use pfn_to_page() and then return page->pgmap?
> Then we can get rid of the pgmap_radix tree totally.

No, for two reasons:

1/ find_dev_pagemap() is used in places where the pfn_to_page() mapping is
not yet established (see: to_vmem_altmap())

2/ at shutdown, new get_dev_pagemap() requests can race the memmap
being torn down.  So, unless we already have a reference against the
page_map, we always need to look it up under a lock to know that
pfn_to_page() is returning a valid page.
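
Concretely, the slow path continues past the point where the quote above
stops, roughly as follows (sketched from the patch, recapping the last two
quoted lines for context; treat the exact calls as approximate):

	/* fall back to slow path lookup */
	rcu_read_lock();
	pgmap = find_dev_pagemap(phys);
	/*
	 * The RCU read lock keeps the page_map entry from being freed
	 * during the lookup, and percpu_ref_tryget_live() fails once the
	 * device has begun shutdown, so a racing teardown can never hand
	 * back a pgmap whose memmap is about to disappear.
	 */
	if (pgmap && !percpu_ref_tryget_live(pgmap->ref))
		pgmap = NULL;
	rcu_read_unlock();

	return pgmap;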



Thread overview: 29+ messages
2015-12-21  5:44 [-mm PATCH v4 00/18] get_user_pages() for dax pte and pmd mappings Dan Williams
2015-12-21  5:44 ` [-mm PATCH v4 01/18] kvm: rename pfn_t to kvm_pfn_t Dan Williams
2015-12-21  5:44 ` [-mm PATCH v4 02/18] mm, dax, pmem: introduce pfn_t Dan Williams
2015-12-21  5:44 ` [-mm PATCH v4 03/18] mm: skip memory block registration for ZONE_DEVICE Dan Williams
2015-12-21  5:44 ` [-mm PATCH v4 04/18] mm: introduce find_dev_pagemap() Dan Williams
2015-12-21  5:44 ` [-mm PATCH v4 05/18] x86, mm: introduce vmem_altmap to augment vmemmap_populate() Dan Williams
2015-12-27  8:40   ` Bob Liu
2015-12-21  5:44 ` [-mm PATCH v4 06/18] libnvdimm, pfn, pmem: allocate memmap array in persistent memory Dan Williams
2015-12-21  5:44 ` [-mm PATCH v4 07/18] avr32: convert to asm-generic/memory_model.h Dan Williams
2015-12-21  5:44 ` [-mm PATCH v4 08/18] hugetlb: fix compile error on tile Dan Williams
2015-12-21  5:44 ` [-mm PATCH v4 09/18] frv: fix compiler warning from definition of __pmd() Dan Williams
2015-12-21  5:45 ` [-mm PATCH v4 10/18] x86, mm: introduce _PAGE_DEVMAP Dan Williams
2015-12-21  5:45 ` [-mm PATCH v4 11/18] mm, dax, gpu: convert vm_insert_mixed to pfn_t Dan Williams
2015-12-21  5:45 ` [-mm PATCH v4 12/18] mm, dax: convert vmf_insert_pfn_pmd() " Dan Williams
2015-12-21  5:45 ` [-mm PATCH v4 13/18] libnvdimm, pmem: move request_queue allocation earlier in probe Dan Williams
2015-12-21  5:45 ` [-mm PATCH v4 14/18] mm, dax, pmem: introduce {get|put}_dev_pagemap() for dax-gup Dan Williams
2015-12-27  8:46   ` Bob Liu
2015-12-27 19:02     ` Dan Williams [this message]
2015-12-21  5:45 ` [-mm PATCH v4 15/18] mm, dax: dax-pmd vs thp-pmd vs hugetlbfs-pmd Dan Williams
2015-12-25  0:59   ` [-mm PATCH v5 " Dan Williams
2015-12-25  1:11     ` Sasha Levin
2015-12-30  5:32   ` [-mm PATCH v4 " Williams, Dan J
2015-12-21  5:45 ` [-mm PATCH v4 16/18] mm, x86: get_user_pages() for dax mappings Dan Williams
2015-12-25  1:03   ` [-mm PATCH v5 " Dan Williams
2015-12-21  5:45 ` [-mm PATCH v4 17/18] dax: provide diagnostics for pmd mapping failures Dan Williams
2015-12-21  5:45 ` [-mm PATCH v4 18/18] dax: re-enable dax pmd mappings Dan Williams
2015-12-27  8:33 ` [-mm PATCH v4 00/18] get_user_pages() for dax pte and " Bob Liu
2015-12-27 18:55   ` Dan Williams
2015-12-29  3:23     ` Bob Liu
