Date: Tue, 22 Sep 2020 16:58:19 +0200
From: Christoph Hellwig
To: boris.ostrovsky@oracle.com
Cc: Christoph Hellwig, Andrew Morton, Peter Zijlstra, Juergen Gross,
	Stefano Stabellini, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
	Minchan Kim, Nitin Gupta, x86@kernel.org,
	xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 6/6] x86/xen: open code alloc_vm_area in arch_gnttab_valloc
Message-ID: <20200922145819.GA28420@lst.de>
References: <20200918163724.2511-1-hch@lst.de> <20200918163724.2511-7-hch@lst.de> <0833b9a8-5096-d105-a850-1336150eada1@oracle.com>
In-Reply-To: <0833b9a8-5096-d105-a850-1336150eada1@oracle.com>

On Mon, Sep 21, 2020 at 04:44:10PM -0400, boris.ostrovsky@oracle.com wrote:
> This will end up incrementing area->ptes pointer. So perhaps something like
>
>     pte_t **ptes = area->ptes;
>
>     if (apply_to_page_range(&init_mm, (unsigned long)area->area->addr,
>                             PAGE_SIZE * nr_frames, gnttab_apply, &ptes)) {
>
>         ...

Yeah.  What do you think of this version?  I think it is a little
cleaner and matches what xenbus does.
At this point it probably should be split into a Xen and an alloc_vm_area
removal patch, though.

---
From 74d6b797e049f72b5e9f63f14da6321c4209a792 Mon Sep 17 00:00:00 2001
From: Christoph Hellwig
Date: Wed, 16 Sep 2020 16:09:42 +0200
Subject: x86/xen: open code alloc_vm_area in arch_gnttab_valloc

Open code alloc_vm_area in the last remaining caller.

Signed-off-by: Christoph Hellwig
---
 arch/x86/xen/grant-table.c | 27 +++++++++++++++------
 include/linux/vmalloc.h    |  5 +---
 mm/nommu.c                 |  7 ------
 mm/vmalloc.c               | 48 --------------------------------------
 4 files changed, 21 insertions(+), 66 deletions(-)

diff --git a/arch/x86/xen/grant-table.c b/arch/x86/xen/grant-table.c
index 4988e19598c8a5..1e681bf62561a0 100644
--- a/arch/x86/xen/grant-table.c
+++ b/arch/x86/xen/grant-table.c
@@ -25,6 +25,7 @@
 static struct gnttab_vm_area {
 	struct vm_struct *area;
 	pte_t **ptes;
+	int idx;
 } gnttab_shared_vm_area, gnttab_status_vm_area;
 
 int arch_gnttab_map_shared(unsigned long *frames, unsigned long nr_gframes,
@@ -90,19 +91,31 @@ void arch_gnttab_unmap(void *shared, unsigned long nr_gframes)
 	}
 }
 
+static int gnttab_apply(pte_t *pte, unsigned long addr, void *data)
+{
+	struct gnttab_vm_area *area = data;
+
+	area->ptes[area->idx++] = pte;
+	return 0;
+}
+
 static int arch_gnttab_valloc(struct gnttab_vm_area *area, unsigned nr_frames)
 {
 	area->ptes = kmalloc_array(nr_frames, sizeof(*area->ptes), GFP_KERNEL);
 	if (area->ptes == NULL)
 		return -ENOMEM;
-
-	area->area = alloc_vm_area(PAGE_SIZE * nr_frames, area->ptes);
-	if (area->area == NULL) {
-		kfree(area->ptes);
-		return -ENOMEM;
-	}
-
+	area->area = get_vm_area(PAGE_SIZE * nr_frames, VM_IOREMAP);
+	if (!area->area)
+		goto out_free_ptes;
+	if (apply_to_page_range(&init_mm, (unsigned long)area->area->addr,
+			PAGE_SIZE * nr_frames, gnttab_apply, area))
+		goto out_free_vm_area;
 	return 0;
+out_free_vm_area:
+	free_vm_area(area->area);
+out_free_ptes:
+	kfree(area->ptes);
+	return -ENOMEM;
 }
 
 static void
 arch_gnttab_vfree(struct gnttab_vm_area *area)
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 8ecd92a947ee0c..a1a4e2f8163504 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -168,6 +168,7 @@ extern struct vm_struct *__get_vm_area_caller(unsigned long size,
 					unsigned long flags,
 					unsigned long start, unsigned long end,
 					const void *caller);
+void free_vm_area(struct vm_struct *area);
 extern struct vm_struct *remove_vm_area(const void *addr);
 extern struct vm_struct *find_vm_area(const void *addr);
 
@@ -203,10 +204,6 @@ static inline void set_vm_flush_reset_perms(void *addr)
 }
 #endif
 
-/* Allocate/destroy a 'vmalloc' VM area. */
-extern struct vm_struct *alloc_vm_area(size_t size, pte_t **ptes);
-extern void free_vm_area(struct vm_struct *area);
-
 /* for /dev/kmem */
 extern long vread(char *buf, char *addr, unsigned long count);
 extern long vwrite(char *buf, char *addr, unsigned long count);
diff --git a/mm/nommu.c b/mm/nommu.c
index 75a327149af127..9272f30e4c4726 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -354,13 +354,6 @@ void vm_unmap_aliases(void)
 }
 EXPORT_SYMBOL_GPL(vm_unmap_aliases);
 
-struct vm_struct *alloc_vm_area(size_t size, pte_t **ptes)
-{
-	BUG();
-	return NULL;
-}
-EXPORT_SYMBOL_GPL(alloc_vm_area);
-
 void free_vm_area(struct vm_struct *area)
 {
 	BUG();
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 59f2afcf26c312..9f29147deca580 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3077,54 +3077,6 @@ int remap_vmalloc_range(struct vm_area_struct *vma, void *addr,
 }
 EXPORT_SYMBOL(remap_vmalloc_range);
 
-static int f(pte_t *pte, unsigned long addr, void *data)
-{
-	pte_t ***p = data;
-
-	if (p) {
-		*(*p) = pte;
-		(*p)++;
-	}
-	return 0;
-}
-
-/**
- * alloc_vm_area - allocate a range of kernel address space
- * @size: size of the area
- * @ptes: returns the PTEs for the address space
- *
- * Returns: NULL on failure, vm_struct on success
- *
- * This function reserves a range of kernel address space, and
- * allocates pagetables to map that range. No actual mappings
- * are created.
- *
- * If @ptes is non-NULL, pointers to the PTEs (in init_mm)
- * allocated for the VM area are returned.
- */
-struct vm_struct *alloc_vm_area(size_t size, pte_t **ptes)
-{
-	struct vm_struct *area;
-
-	area = get_vm_area_caller(size, VM_IOREMAP,
-				__builtin_return_address(0));
-	if (area == NULL)
-		return NULL;
-
-	/*
-	 * This ensures that page tables are constructed for this region
-	 * of kernel virtual address space and mapped into init_mm.
-	 */
-	if (apply_to_page_range(&init_mm, (unsigned long)area->addr,
-				size, f, ptes ? &ptes : NULL)) {
-		free_vm_area(area);
-		return NULL;
-	}
-
-	return area;
-}
-EXPORT_SYMBOL_GPL(alloc_vm_area);
-
 void free_vm_area(struct vm_struct *area)
 {
 	struct vm_struct *ret;
-- 
2.28.0