From: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
To: David Hildenbrand <david@redhat.com>
Cc: linux-kernel@vger.kernel.org,
Alexandru Elisei <alexandru.elisei@arm.com>,
Alexander Potapenko <glider@google.com>,
Andrew Morton <akpm@linux-foundation.org>,
Brendan Jackman <jackmanb@google.com>,
Christoph Lameter <cl@gentwo.org>,
Dennis Zhou <dennis@kernel.org>,
Dmitry Vyukov <dvyukov@google.com>,
dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
iommu@lists.linux.dev, io-uring@vger.kernel.org,
Jason Gunthorpe <jgg@nvidia.com>, Jens Axboe <axboe@kernel.dk>,
Johannes Weiner <hannes@cmpxchg.org>,
John Hubbard <jhubbard@nvidia.com>,
kasan-dev@googlegroups.com, kvm@vger.kernel.org,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
Linus Torvalds <torvalds@linux-foundation.org>,
linux-arm-kernel@axis.com, linux-arm-kernel@lists.infradead.org,
linux-crypto@vger.kernel.org, linux-ide@vger.kernel.org,
linux-kselftest@vger.kernel.org, linux-mips@vger.kernel.org,
linux-mmc@vger.kernel.org, linux-mm@kvack.org,
linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
linux-scsi@vger.kernel.org, Marco Elver <elver@google.com>,
Marek Szyprowski <m.szyprowski@samsung.com>,
Michal Hocko <mhocko@suse.com>, Mike Rapoport <rppt@kernel.org>,
Muchun Song <muchun.song@linux.dev>,
netdev@vger.kernel.org, Oscar Salvador <osalvador@suse.de>,
Peter Xu <peterx@redhat.com>, Robin Murphy <robin.murphy@arm.com>,
Suren Baghdasaryan <surenb@google.com>, Tejun Heo <tj@kernel.org>,
virtualization@lists.linux.dev, Vlastimil Babka <vbabka@suse.cz>,
wireguard@lists.zx2c4.com, x86@kernel.org,
Zi Yan <ziy@nvidia.com>
Subject: Re: [PATCH v1 21/36] mm/cma: refuse handing out non-contiguous page ranges
Date: Thu, 28 Aug 2025 18:28:33 +0100
Message-ID: <b772a0c0-6e09-4fa4-a113-fe5adf9c7fe0@lucifer.local>
In-Reply-To: <20250827220141.262669-22-david@redhat.com>

On Thu, Aug 28, 2025 at 12:01:25AM +0200, David Hildenbrand wrote:
> Let's disallow handing out PFN ranges with non-contiguous pages, so we
> can remove the nth-page usage in __cma_alloc(), and so any callers don't
> have to worry about that either when wanting to blindly iterate pages.
>
> This is really only a problem in configs with SPARSEMEM but without
> SPARSEMEM_VMEMMAP, and only when we would cross memory sections in some
> cases.
I'm guessing this is something that we don't need to worry about in
reality?
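
(To make the corner case concrete to myself, a purely hypothetical sketch with
made-up numbers, not anything from the patch:

	/*
	 * Assuming 4 KiB pages and 128 MiB sections, so PAGES_PER_SECTION == 0x8000.
	 * Consider a CMA allocation starting just below a section boundary:
	 */
	start_pfn = 0x87ff0;	/* 16 pages below the 0x88000 section boundary */
	nr_pages  = 0x20;	/* ...spilling 16 pages into the next section   */
	/*
	 * With SPARSEMEM && !SPARSEMEM_VMEMMAP the memmap of the next section
	 * may live at an unrelated virtual address, so page + 0x10 need not
	 * equal pfn_to_page(0x88000), which is what nth_page() papered over.
	 */

So presumably this only ever bites when a single allocation straddles a section
boundary within a CMA area.)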
>
> Will this cause harm? Probably not, because it's mostly 32bit that does
> not support SPARSEMEM_VMEMMAP. If this ever becomes a problem we could
> look into allocating the memmap for the memory sections spanned by a
> single CMA region in one go from memblock.
>
> Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
LGTM other than the refactoring point below.

The CMA stuff looks fine afaict after staring at it for a while, on the
proviso that handing out ranges within the same section is always going to be
the case.

Anyway, overall LGTM, so:

Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
> include/linux/mm.h | 6 ++++++
> mm/cma.c | 39 ++++++++++++++++++++++++---------------
> mm/util.c | 33 +++++++++++++++++++++++++++++++++
> 3 files changed, 63 insertions(+), 15 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index f6880e3225c5c..2ca1eb2db63ec 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -209,9 +209,15 @@ extern unsigned long sysctl_user_reserve_kbytes;
> extern unsigned long sysctl_admin_reserve_kbytes;
>
> #if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
> +bool page_range_contiguous(const struct page *page, unsigned long nr_pages);
> #define nth_page(page,n) pfn_to_page(page_to_pfn((page)) + (n))
> #else
> #define nth_page(page,n) ((page) + (n))
> +static inline bool page_range_contiguous(const struct page *page,
> + unsigned long nr_pages)
> +{
> + return true;
> +}
> #endif
>
> /* to align the pointer to the (next) page boundary */
> diff --git a/mm/cma.c b/mm/cma.c
> index e56ec64d0567e..813e6dc7b0954 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -780,10 +780,8 @@ static int cma_range_alloc(struct cma *cma, struct cma_memrange *cmr,
> unsigned long count, unsigned int align,
> struct page **pagep, gfp_t gfp)
> {
> - unsigned long mask, offset;
> - unsigned long pfn = -1;
> - unsigned long start = 0;
> unsigned long bitmap_maxno, bitmap_no, bitmap_count;
> + unsigned long start, pfn, mask, offset;
> int ret = -EBUSY;
> struct page *page = NULL;
>
> @@ -795,7 +793,7 @@ static int cma_range_alloc(struct cma *cma, struct cma_memrange *cmr,
> if (bitmap_count > bitmap_maxno)
> goto out;
>
> - for (;;) {
> + for (start = 0; ; start = bitmap_no + mask + 1) {
> spin_lock_irq(&cma->lock);
> /*
> * If the request is larger than the available number
> @@ -812,6 +810,22 @@ static int cma_range_alloc(struct cma *cma, struct cma_memrange *cmr,
> spin_unlock_irq(&cma->lock);
> break;
> }
> +
> + pfn = cmr->base_pfn + (bitmap_no << cma->order_per_bit);
> + page = pfn_to_page(pfn);
> +
> + /*
> + * Do not hand out page ranges that are not contiguous, so
> + * callers can just iterate the pages without having to worry
> + * about these corner cases.
> + */
> + if (!page_range_contiguous(page, count)) {
> + spin_unlock_irq(&cma->lock);
> + pr_warn_ratelimited("%s: %s: skipping incompatible area [0x%lx-0x%lx]",
> + __func__, cma->name, pfn, pfn + count - 1);
> + continue;
> + }
> +
> bitmap_set(cmr->bitmap, bitmap_no, bitmap_count);
> cma->available_count -= count;
> /*
> @@ -821,29 +835,24 @@ static int cma_range_alloc(struct cma *cma, struct cma_memrange *cmr,
> */
> spin_unlock_irq(&cma->lock);
>
> - pfn = cmr->base_pfn + (bitmap_no << cma->order_per_bit);
> mutex_lock(&cma->alloc_mutex);
> ret = alloc_contig_range(pfn, pfn + count, ACR_FLAGS_CMA, gfp);
> mutex_unlock(&cma->alloc_mutex);
> - if (ret == 0) {
> - page = pfn_to_page(pfn);
> + if (!ret)
> break;
> - }
>
> cma_clear_bitmap(cma, cmr, pfn, count);
> if (ret != -EBUSY)
> break;
>
> pr_debug("%s(): memory range at pfn 0x%lx %p is busy, retrying\n",
> - __func__, pfn, pfn_to_page(pfn));
> + __func__, pfn, page);
>
> - trace_cma_alloc_busy_retry(cma->name, pfn, pfn_to_page(pfn),
> - count, align);
> - /* try again with a bit different memory target */
> - start = bitmap_no + mask + 1;
> + trace_cma_alloc_busy_retry(cma->name, pfn, page, count, align);
> }
> out:
> - *pagep = page;
> + if (!ret)
> + *pagep = page;
> return ret;
> }
>
> @@ -882,7 +891,7 @@ static struct page *__cma_alloc(struct cma *cma, unsigned long count,
> */
> if (page) {
> for (i = 0; i < count; i++)
> - page_kasan_tag_reset(nth_page(page, i));
> + page_kasan_tag_reset(page + i);
> }
>
> if (ret && !(gfp & __GFP_NOWARN)) {
> diff --git a/mm/util.c b/mm/util.c
> index d235b74f7aff7..0bf349b19b652 100644
> --- a/mm/util.c
> +++ b/mm/util.c
> @@ -1280,4 +1280,37 @@ unsigned int folio_pte_batch(struct folio *folio, pte_t *ptep, pte_t pte,
> {
> return folio_pte_batch_flags(folio, NULL, ptep, &pte, max_nr, 0);
> }
> +
> +#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
> +/**
> + * page_range_contiguous - test whether the page range is contiguous
> + * @page: the start of the page range.
> + * @nr_pages: the number of pages in the range.
> + *
> + * Test whether the page range is contiguous, such that they can be iterated
> + * naively, corresponding to iterating a contiguous PFN range.
> + *
> + * This function should primarily only be used for debug checks, or when
> + * working with page ranges that are not naturally contiguous (e.g., pages
> + * within a folio are).
> + *
> + * Returns true if contiguous, otherwise false.
> + */
> +bool page_range_contiguous(const struct page *page, unsigned long nr_pages)
> +{
> + const unsigned long start_pfn = page_to_pfn(page);
> + const unsigned long end_pfn = start_pfn + nr_pages;
> + unsigned long pfn;
> +
> + /*
> + * The memmap is allocated per memory section. We need to check
> + * each involved memory section once.
> + */
> + for (pfn = ALIGN(start_pfn, PAGES_PER_SECTION);
> + pfn < end_pfn; pfn += PAGES_PER_SECTION)
> + if (unlikely(page + (pfn - start_pfn) != pfn_to_page(pfn)))
> + return false;
I find this pretty confusing; my test for this is how many times I have to read
the code to understand what it's doing :)
So we have something like:

  (pfn of page)
  start_pfn              pfn = align UP
      |                       |
      v                       v
  |          section          |
      <----------------------->
           pfn - start_pfn

Then check page + (pfn - start_pfn) == pfn_to_page(pfn)

And loop such that:

  (pfn of page)
  start_pfn                                      pfn
      |                                           |
      v                                           v
  |          section          |    section        |
      <-------------------------------------------->
                    pfn - start_pfn

Again check page + (pfn - start_pfn) == pfn_to_page(pfn)

And so on.
So the logic looks good, but it's just... that took me a hot second to
parse :)
I think a few simple fixups would make it clearer, something like:
bool page_range_contiguous(const struct page *page, unsigned long nr_pages)
{
	const unsigned long start_pfn = page_to_pfn(page);
	const unsigned long end_pfn = start_pfn + nr_pages;
	/* The PFN of the start of the next section. */
	unsigned long pfn = ALIGN(start_pfn, PAGES_PER_SECTION);
	/* The page we'd expect to see if the range were contiguous. */
	const struct page *expected = page + (pfn - start_pfn);

	/*
	 * The memmap is allocated per memory section. We need to check
	 * each involved memory section once.
	 */
	for (; pfn < end_pfn; pfn += PAGES_PER_SECTION, expected += PAGES_PER_SECTION)
		if (unlikely(expected != pfn_to_page(pfn)))
			return false;

	return true;
}
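
As a sanity check on the rewritten form (illustrative numbers only, assuming
PAGES_PER_SECTION == 0x8000):

	/*
	 * Range entirely within one section: ALIGN() jumps past end_pfn, the
	 * loop body never runs and we return true immediately.
	 */
	start_pfn = 0x8010; nr_pages = 0x100;	/* pfn = 0x10000 > end_pfn = 0x8110 */

	/*
	 * Range crossing exactly one boundary: a single pfn_to_page() check at
	 * pfn == 0x10000 against expected == page + 0x7ff0, after which pfn
	 * jumps past end_pfn == 0x10010.
	 */
	start_pfn = 0x8010; nr_pages = 0x8000;

i.e. behaviour should be identical to your version, hopefully just a little
easier to follow.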
> + return true;
> +}
> +#endif
> #endif /* CONFIG_MMU */
> --
> 2.50.1
>