public inbox for linux-mm@kvack.org
From: Mike Rapoport <rppt@kernel.org>
To: Hubert Mazur <hmazur@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Stanislaw Kardach <skardach@google.com>,
	Michal Krawczyk <mikrawczyk@google.com>,
	Slawomir Rosek <srosek@google.com>,
	Lukasz Majczak <lmajczak@google.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 0/1] Encapsulate the populate and alloc as one atomic
Date: Wed, 18 Mar 2026 16:30:53 +0200	[thread overview]
Message-ID: <abq3HenO8f1c0WBS@kernel.org> (raw)
In-Reply-To: <20260317125020.1293472-1-hmazur@google.com>

Hi Hubert,

On Tue, Mar 17, 2026 at 12:50:19PM +0000, Hubert Mazur wrote:
> Hello,
> thanks for the review of the v1 patchset. I tried to make v2 diff as
> small as possible and without a modification of the core logic.
> 
> When a block of memory is requested from the execmem manager, the
> free_areas tree is traversed to find an area of the given size. If none is
> found, a new fragment, aligned to PAGE_SIZE, is allocated and
> added to free_areas. Afterwards, the free_areas tree is traversed
> again to fulfil the request.
> 
> The above operations of allocation and tree traversal are not atomic,
> so another request may consume the newly allocated memory
> block dedicated to the original request. As a result, the first
> request fails to get the memory. Such an occurrence can be spotted on
> evices running the 6.18 kernel during parallel module loading.

typo: devices

In general, a single patch does not require a cover letter, but the details
about why the patch is needed should be part of the commit message.
 
> Regards
> Hubert
> 
> Changes in v2:
> The __execmem_cache_alloc_locked function (a lockless version of
> __execmem_cache_alloc) is introduced and called after
> execmem_cache_add_locked from the __execmem_cache_populate_alloc
> function (renamed from execmem_cache_populate). Both calls are
> now guarded by a single mutex.
> 
> Changes in v1:
> Allocate a new memory fragment and assign it directly to busy_areas
> inside the execmem_cache_populate function.
> 
> Link to v1:
> https://lore.kernel.org/all/20260312131438.361746-1-hmazur@google.com/T/#t
> 
> Hubert Mazur (1):
>   mm/execmem: Make the populate and alloc atomic
> 
>  mm/execmem.c | 61 +++++++++++++++++++++++++++++-----------------------
>  1 file changed, 34 insertions(+), 27 deletions(-)
> 
> --
> 2.53.0.851.ga537e3e6e9-goog
> 

-- 
Sincerely yours,
Mike.



Thread overview: 4+ messages
2026-03-17 12:50 [PATCH v2 0/1] Encapsulate the populate and alloc as one atomic Hubert Mazur
2026-03-17 12:50 ` [PATCH v2 1/1] mm/execmem: Make the populate and alloc atomic Hubert Mazur
2026-03-18 14:41   ` Mike Rapoport
2026-03-18 14:30 ` Mike Rapoport [this message]
