Date: Mon, 10 Jun 2013 16:03:26 +0200
From: Michael Holzheu
To: HATAYAMA Daisuke
Cc: Heiko Carstens, kexec@lists.infradead.org, Jan Willeke,
 linux-kernel@vger.kernel.org, Martin Schwidefsky, Vivek Goyal
Subject: Re: [PATCH v5 3/5] vmcore: Introduce remap_oldmem_pfn_range()
Message-ID: <20130610160326.2dd4f6fb@holzheu>
References: <1370624161-2298-1-git-send-email-holzheu@linux.vnet.ibm.com>
 <1370624161-2298-4-git-send-email-holzheu@linux.vnet.ibm.com>

On Mon, 10 Jun 2013 22:40:24 +0900
HATAYAMA Daisuke wrote:

> > +static int mmap_vmcore_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
> > +{
> > +	struct address_space *mapping = vma->vm_private_data;
> > +	pgoff_t index = vmf->pgoff;
> > +	struct page *page;
> > +	loff_t src;
> > +	char *buf;
> > +	int rc;
> > +
> > +find_page:
> > +	page = find_lock_page(mapping, index);
> > +	if (page) {
> > +		unlock_page(page);
> > +		rc = VM_FAULT_MINOR;
> > +	} else {
> > +		page = page_cache_alloc_cold(mapping);
> > +		if (!page)
> > +			return VM_FAULT_OOM;
> > +		rc = add_to_page_cache_lru(page, mapping, index, GFP_KERNEL);
> > +		if (rc) {
> > +			page_cache_release(page);
> > +			if (rc == -EEXIST)
> > +				goto find_page;
> > +			/* Probably ENOMEM for radix tree node */
> > +			return VM_FAULT_OOM;
> > +		}
> > +		buf = (void *) (page_to_pfn(page) << PAGE_SHIFT);
> > +		src = index << PAGE_CACHE_SHIFT;
> > +		__read_vmcore(buf, PAGE_SIZE, &src, 0);
> > +		unlock_page(page);
> > +		rc = VM_FAULT_MAJOR;
> > +	}
> > +	vmf->page = page;
> > +	return rc;
> > +}
>
> How about reusing find_or_create_page()?

Hmmm, how would I know then whether I have to fill the page with
__read_vmcore() or not?

Best Regards,
Michael

_______________________________________________
kexec mailing list
kexec@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/kexec
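
[For context: the page flag that usually answers Michael's question is PG_uptodate. find_or_create_page() returns the page locked whether it was found or freshly allocated, so the fault handler can test PageUptodate() to decide if the page still needs filling. The following is only a sketch of that idea under the patch's own helpers (__read_vmcore() etc.); the error-handling and return codes are assumptions, not the code that was ultimately merged.]

```c
static int mmap_vmcore_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
{
	struct address_space *mapping = vma->vm_private_data;
	pgoff_t index = vmf->pgoff;
	struct page *page;
	loff_t src;
	char *buf;
	int rc;

	/* Returns the page locked, whether found in the cache or newly allocated. */
	page = find_or_create_page(mapping, index, GFP_KERNEL);
	if (!page)
		return VM_FAULT_OOM;
	if (!PageUptodate(page)) {
		/* Freshly allocated page: fill it from the old kernel's memory. */
		src = (loff_t) index << PAGE_CACHE_SHIFT;
		buf = (void *) (page_to_pfn(page) << PAGE_SHIFT);
		rc = __read_vmcore(buf, PAGE_SIZE, &src, 0);
		if (rc < 0) {
			unlock_page(page);
			page_cache_release(page);
			return VM_FAULT_SIGBUS;	/* assumed error mapping */
		}
		SetPageUptodate(page);
	}
	unlock_page(page);
	vmf->page = page;
	return 0;
}
```

This removes the find_lock_page()/add_to_page_cache_lru() race (and the -EEXIST retry loop) because find_or_create_page() handles the lookup-or-insert atomically with respect to the page lock.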