Date: Thu, 19 Mar 2026 19:19:15 +0200
From: Mike Rapoport
To: Hubert Mazur
Cc: Andrew Morton, Greg Kroah-Hartman, Stanislaw Kardach, Michal Krawczyk,
    Slawomir Rosek, Lukasz Majczak, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3] mm/execmem: Make the populate and alloc atomic
References: <20260319085907.3510446-1-hmazur@google.com>
In-Reply-To: <20260319085907.3510446-1-hmazur@google.com>

On Thu, Mar 19, 2026 at 08:59:07AM +0000, Hubert Mazur wrote:
> When a memory block is requested from the execmem manager, it tries
> to find a suitable fragment in the free_areas. In case there is no
> such block, a new memory area is added to free_areas and then
> allocated to the caller. Those two operations must be atomic
> to ensure that no other memory request consumes new block.

Sorry if I was not clear: the motivation for the patch that you had in
the cover letter in v2 should have gone into the commit message rather
than being dropped entirely.

> Signed-off-by: Hubert Mazur
> ---
> Changes in v3:
> - Addressed the maintainer comments regarding style issues
> - Removed unnecessary conditional statement
>
> Changes in v2:
> The __execmem_cache_alloc_locked function (lockless version of
> __execmem_cache_alloc) is introduced and called after
> execmem_cache_add_locked from the __execmem_cache_populate_alloc
> function (renamed from execmem_cache_populate). Both calls are
> guarded now with a single mutex.
>
> Link to v2:
> https://lore.kernel.org/all/20260317125020.1293472-2-hmazur@google.com/
>
> Changes in v1:
> Allocate new memory fragment and assign it directly to the busy_areas
> inside execmem_cache_populate function.
>
> Link to v1:
> https://lore.kernel.org/all/20260312131438.361746-1-hmazur@google.com/T/#t
>
>  mm/execmem.c | 55 +++++++++++++++++++++++++++------------------------
>  1 file changed, 29 insertions(+), 26 deletions(-)
>
> diff --git a/mm/execmem.c b/mm/execmem.c
> index 810a4ba9c924..4477bb9209ab 100644
> --- a/mm/execmem.c
> +++ b/mm/execmem.c
> @@ -203,13 +203,6 @@ static int execmem_cache_add_locked(void *ptr, size_t size, gfp_t gfp_mask)
>  	return mas_store_gfp(&mas, (void *)lower, gfp_mask);
>  }
>  
> -static int execmem_cache_add(void *ptr, size_t size, gfp_t gfp_mask)
> -{
> -	guard(mutex)(&execmem_cache.mutex);
> -
> -	return execmem_cache_add_locked(ptr, size, gfp_mask);
> -}
> -
>  static bool within_range(struct execmem_range *range, struct ma_state *mas,
>  			 size_t size)
>  {
> @@ -225,18 +218,16 @@ static bool within_range(struct execmem_range *range, struct ma_state *mas,
>  	return false;
>  }
>  
> -static void *__execmem_cache_alloc(struct execmem_range *range, size_t size)
> +static void *execmem_cache_alloc_locked(struct execmem_range *range, size_t size)
>  {
>  	struct maple_tree *free_areas = &execmem_cache.free_areas;
>  	struct maple_tree *busy_areas = &execmem_cache.busy_areas;
>  	MA_STATE(mas_free, free_areas, 0, ULONG_MAX);
>  	MA_STATE(mas_busy, busy_areas, 0, ULONG_MAX);
> -	struct mutex *mutex = &execmem_cache.mutex;
>  	unsigned long addr, last, area_size = 0;
>  	void *area, *ptr = NULL;
>  	int err;
>  
> -	mutex_lock(mutex);
>  	mas_for_each(&mas_free, area, ULONG_MAX) {
>  		area_size = mas_range_len(&mas_free);
>  
> @@ -245,7 +236,7 @@ static void *__execmem_cache_alloc(struct execmem_range *range, size_t size)
>  	}
>  
>  	if (area_size < size)
> -		goto out_unlock;
> +		return NULL;
>  
>  	addr = mas_free.index;
>  	last = mas_free.last;
> @@ -254,7 +245,7 @@ static void *__execmem_cache_alloc(struct execmem_range *range, size_t size)
>  	mas_set_range(&mas_busy, addr, addr + size - 1);
>  	err = mas_store_gfp(&mas_busy, (void *)addr, GFP_KERNEL);
>  	if (err)
> -		goto out_unlock;
> +		return NULL;
>  
>  	mas_store_gfp(&mas_free, NULL, GFP_KERNEL);
>  	if (area_size > size) {
> @@ -268,19 +259,25 @@ static void *__execmem_cache_alloc(struct execmem_range *range, size_t size)
>  		err = mas_store_gfp(&mas_free, ptr, GFP_KERNEL);
>  		if (err) {
>  			mas_store_gfp(&mas_busy, NULL, GFP_KERNEL);
> -			goto out_unlock;
> +			return NULL;
>  		}
>  	}
>  	ptr = (void *)addr;
>  
> -out_unlock:
> -	mutex_unlock(mutex);
>  	return ptr;
>  }
>  
> -static int execmem_cache_populate(struct execmem_range *range, size_t size)
> +static void *__execmem_cache_alloc(struct execmem_range *range, size_t size)
> +{
> +	guard(mutex)(&execmem_cache.mutex);
> +
> +	return execmem_cache_alloc_locked(range, size);
> +}
> +
> +static void *execmem_cache_populate_alloc(struct execmem_range *range, size_t size)
>  {
>  	unsigned long vm_flags = VM_ALLOW_HUGE_VMAP;
> +	struct mutex *mutex = &execmem_cache.mutex;
>  	struct vm_struct *vm;
>  	size_t alloc_size;
>  	int err = -ENOMEM;
> @@ -294,7 +291,7 @@ static int execmem_cache_populate(struct execmem_range *range, size_t size)
>  	}
>  
>  	if (!p)
> -		return err;
> +		return NULL;
>  
>  	vm = find_vm_area(p);
>  	if (!vm)
> @@ -307,33 +304,39 @@ static int execmem_cache_populate(struct execmem_range *range, size_t size)
>  	if (err)
>  		goto err_free_mem;
>  
> -	err = execmem_cache_add(p, alloc_size, GFP_KERNEL);
> +	/*
> +	 * New memory blocks must be propagated and allocated as an atomic

Nit:                                  ^ allocated and added to the cache

> +	 * operation, otherwise it may be consumed by a parallel call

                              ^ they

> +	 * to the execmem_cache_alloc function.
> + */ > + mutex_lock(mutex); > + err = execmem_cache_add_locked(p, alloc_size, GFP_KERNEL); > if (err) > goto err_reset_direct_map; > > - return 0; > + p = execmem_cache_alloc_locked(range, size); > + > + mutex_unlock(mutex); > + > + return p; > > err_reset_direct_map: > + mutex_unlock(mutex); > execmem_set_direct_map_valid(vm, true); > err_free_mem: > vfree(p); > - return err; > + return NULL; > } > > static void *execmem_cache_alloc(struct execmem_range *range, size_t size) > { > void *p; > - int err; > > p = __execmem_cache_alloc(range, size); > if (p) > return p; > > - err = execmem_cache_populate(range, size); > - if (err) > - return NULL; > - > - return __execmem_cache_alloc(range, size); > + return execmem_cache_populate_alloc(range, size); > } > > static inline bool is_pending_free(void *ptr) > -- > 2.53.0.851.ga537e3e6e9-goog > -- Sincerely yours, Mike.