From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 24 Apr 2026 12:14:48 -0700
In-Reply-To: <20260424191456.2679717-1-stevensd@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20260424191456.2679717-1-stevensd@google.com>
X-Mailer: git-send-email 2.54.0.rc2.544.gc7ae2d5bb8-goog
Message-ID: <20260424191456.2679717-6-stevensd@google.com>
Subject: [PATCH v2 05/13] mm/vmalloc: Add get_vm_area_node() and vmap_pages_range() public functions
From: David Stevens
To: Pasha Tatashin, Linus Walleij, Will Deacon, Quentin Perret, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
	"H. Peter Anvin", Andy Lutomirski, Xin Li, Peter Zijlstra,
	Andrew Morton, David Hildenbrand, Lorenzo Stoakes, "Liam R. Howlett",
	Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
	Uladzislau Rezki, Kees Cook
Cc: David Stevens, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Content-Type: text/plain; charset="UTF-8"

From: Pasha Tatashin

get_vm_area_node()

Unlike the other public get_vm_area_* variants, this one accepts the NUMA
node from which to allocate the vm area data structure, as well as an
align argument, which makes it possible to create a vm area with a
specific alignment.

This call is going to be used by dynamic stacks to ensure that the stack
VM area has a specific alignment, so that even when only one page is
mapped, no page table allocations are needed to map the remaining stack
pages.

vmap_pages_range()

We will need it from kernel/fork.c in order to map the initial stack
pages, so export the function and add a forward declaration of it to the
linux/vmalloc.h header.
Signed-off-by: Pasha Tatashin
Signed-off-by: Linus Walleij
[Switched to vmap_pages_range instead of noflush variant, fix typos]
Signed-off-by: David Stevens
---
 include/linux/vmalloc.h | 14 ++++++++++++++
 mm/vmalloc.c            | 25 +++++++++++++++++++++++++
 2 files changed, 39 insertions(+)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index e8e94f90d686..7b56a0b998ab 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -250,6 +250,9 @@ extern struct vm_struct *__get_vm_area_caller(unsigned long size,
					unsigned long flags,
					unsigned long start, unsigned long end,
					const void *caller);
+struct vm_struct *get_vm_area_node(unsigned long size, unsigned long align,
+				   unsigned long flags, int node, gfp_t gfp,
+				   const void *caller);
 void free_vm_area(struct vm_struct *area);
 extern struct vm_struct *remove_vm_area(const void *addr);
 extern struct vm_struct *find_vm_area(const void *addr);
@@ -301,11 +304,22 @@ static inline void set_vm_flush_reset_perms(void *addr)
	if (vm)
		vm->flags |= VM_FLUSH_RESET_PERMS;
 }
+
+int __must_check vmap_pages_range(unsigned long addr, unsigned long end,
+		pgprot_t prot, struct page **pages, unsigned int page_shift);
+
 #else	/* !CONFIG_MMU */
 #define VMALLOC_TOTAL 0UL
 static inline unsigned long vmalloc_nr_pages(void) { return 0; }
 static inline void set_vm_flush_reset_perms(void *addr) {}
+static inline
+int __must_check vmap_pages_range(unsigned long addr, unsigned long end,
+		pgprot_t prot, struct page **pages, unsigned int page_shift)
+{
+	return -EINVAL;
+}
+
 #endif /* CONFIG_MMU */

 #if defined(CONFIG_MMU) && defined(CONFIG_SMP)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 61caa55a4402..39b7e118cbce 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -722,6 +722,7 @@ int vmap_pages_range(unsigned long addr, unsigned long end,
 {
	return __vmap_pages_range(addr, end, prot, pages, page_shift, GFP_KERNEL);
 }
+EXPORT_SYMBOL_GPL(vmap_pages_range);

 static int check_sparse_vm_area(struct vm_struct *area, unsigned long start,
				unsigned long end)
@@ -3285,6 +3286,30 @@ struct vm_struct *get_vm_area_caller(unsigned long size, unsigned long flags,
				  NUMA_NO_NODE, GFP_KERNEL, caller);
 }

+/**
+ * get_vm_area_node - reserve a contiguous and aligned kernel virtual area
+ * @size:	size of the area
+ * @align:	alignment of the start address of the area
+ * @flags:	%VM_IOREMAP for I/O mappings
+ * @node:	NUMA node from which to allocate the area data structure
+ * @gfp:	Flags to pass to the allocator
+ * @caller:	Caller to be stored in the vm area data structure
+ *
+ * Search for an area of @size/align in the kernel virtual mapping area and
+ * reserve it for our purposes. Returns the area descriptor on success or %NULL
+ * on failure.
+ *
+ * Return: the area descriptor on success or %NULL on failure.
+ */
+struct vm_struct *get_vm_area_node(unsigned long size, unsigned long align,
+				   unsigned long flags, int node, gfp_t gfp,
+				   const void *caller)
+{
+	return __get_vm_area_node(size, align, PAGE_SHIFT, flags,
+				  VMALLOC_START, VMALLOC_END,
+				  node, gfp, caller);
+}
+
 /**
  * find_vm_area - find a continuous kernel virtual area
  * @addr:	base address
-- 
2.54.0.rc2.544.gc7ae2d5bb8-goog
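
[Editor's note, not part of the patch: a rough kernel-C sketch of how a
caller such as the dynamic-stack code described in the commit message
might combine the two new interfaces. Only get_vm_area_node() and
vmap_pages_range() come from this patch; THREAD_SIZE, THREAD_ALIGN, and
first_page stand in for whatever the real caller uses.]

	struct vm_struct *area;
	unsigned long addr;
	int ret;

	/*
	 * Reserve a THREAD_ALIGN-aligned area on the local node. With the
	 * alignment guaranteed up front, mapping one page is enough to
	 * ensure no further page table allocations are needed for the
	 * remaining stack pages.
	 */
	area = get_vm_area_node(THREAD_SIZE, THREAD_ALIGN, VM_MAP,
				numa_node_id(), GFP_KERNEL,
				__builtin_return_address(0));
	if (!area)
		return -ENOMEM;

	addr = (unsigned long)area->addr;

	/* Map only the initial stack page; later pages can be added on demand. */
	ret = vmap_pages_range(addr, addr + PAGE_SIZE, PAGE_KERNEL,
			       &first_page, PAGE_SHIFT);
	if (ret)
		free_vm_area(area);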