Message-ID: <1449248149.9855.85.camel@hpe.com>
Subject: Re: [PATCH] mm: Fix mmap MAP_POPULATE for DAX pmd mapping
From: Toshi Kani
To: Dan Williams
Cc: Andrew Morton, "Kirill A. Shutemov", Matthew Wilcox, Ross Zwisler,
 mauricio.porto@hpe.com, Linux MM, linux-fsdevel,
 "linux-nvdimm@lists.01.org", "linux-kernel@vger.kernel.org"
Date: Fri, 04 Dec 2015 09:55:49 -0700

On Thu, 2015-12-03 at 15:43 -0800, Dan Williams wrote:
> On Wed, Dec 2, 2015 at 1:55 PM, Toshi Kani wrote:
> > On Wed, 2015-12-02 at 12:54 -0800, Dan Williams wrote:
> > > On Wed, Dec 2, 2015 at 1:37 PM, Toshi Kani wrote:
> > > > On Wed, 2015-12-02 at 11:57 -0800, Dan Williams wrote:
> > > [..]
> > > > > The whole point of __get_user_pages_fast() is to avoid the
> > > > > overhead of taking the mm semaphore to access the vma.
> > > > > _PAGE_SPECIAL simply tells __get_user_pages_fast() that it
> > > > > needs to fall back to the __get_user_pages() slow path.
> > > >
> > > > I see.
> > > > Then, I think gup_huge_pmd() can simply return 0 when
> > > > !pfn_valid(), instead of VM_BUG_ON().
> > >
> > > Is pfn_valid() a reliable check? It seems to be based on a max_pfn
> > > per node... what happens when pmem is located below that point? I
> > > haven't been able to convince myself that we won't get false
> > > positives, but maybe I'm missing something.
> >
> > I believe we use the version of pfn_valid() in linux/mmzone.h.
>
> Talking this over with Dave, we came to the conclusion that it would be
> safer to be explicit about the pmd not being mapped. He points out
> that unless a platform can guarantee that persistent memory is always
> section aligned, we might get false positive pfn_valid() indications.
> Given that the get_user_pages_fast() path is arch-specific, we can
> simply have an arch-specific pmd bit and not worry about generically
> enabling a "pmd special" bit for now.

Sounds good to me. Thanks!
-Toshi