* [PATCH v7 0/6] Migrate on fault for device pages
@ 2026-03-30 4:30 mpenttil
2026-03-30 4:30 ` [PATCH v7 1/6] mm:/Kconfig changes for migrate " mpenttil
` (5 more replies)
0 siblings, 6 replies; 15+ messages in thread
From: mpenttil @ 2026-03-30 4:30 UTC (permalink / raw)
To: linux-mm; +Cc: linux-kernel, Mika Penttilä
From: Mika Penttilä <mpenttil@redhat.com>
Currently, the way device page faulting and migration work is not
optimal if you want to do both fault handling and migration at once.
Being able to migrate non-present pages (or pages mapped with incorrect
permissions, e.g. COW) to the GPU requires doing either of the
following sequences:
1. hmm_range_fault() - fault in non-present pages with correct permissions, etc.
2. migrate_vma_*() - migrate the pages
Or:
1. migrate_vma_*() - migrate present pages
2. If non-present pages detected by migrate_vma_*():
a) call hmm_range_fault() to fault pages in
b) call migrate_vma_*() again to migrate now present pages
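The second sequence can be sketched in kernel-style pseudocode (illustrative only; real drivers also allocate destination pages, take the right locks, and handle errors):

```c
/*
 * Sketch of sequence 2 (illustrative only): up to three page
 * table walks when pages are not present.
 */
migrate_vma_setup(&migrate);            /* walk #1: collect present pages */
if (migrate.cpages != migrate.npages) { /* some pages were not present */
        hmm_range_fault(&range);        /* walk #2: fault the pages in */
        migrate_vma_setup(&migrate);    /* walk #3: collect again */
}
```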
The problem with the first sequence is that you always have to do two
page walks, even though most of the time the pages are present or
zero-page mappings, so the common case takes a performance hit.
The second sequence is better for the common case, but far worse if
pages aren't present, because now you have to walk the page tables three
times (once to find the page is not present, once so hmm_range_fault()
can find a non-present page to fault in, and once again to set up the
migration). It is also tricky to code correctly. One page table walk
can cost over 1,000 CPU cycles on x86-64, which is a significant hit.
We should be able to walk the page table once, faulting
pages in as required and replacing them with migration entries if
requested.
Add a new flag to the HMM APIs, HMM_PFN_REQ_MIGRATE, which tells
hmm_range_fault() to also prepare for migration during fault handling.
For the migrate_vma_setup() call paths, a flag, MIGRATE_VMA_FAULT, is
added to enable fault handling during migration.
One extra benefit of migrating via the hmm_range_fault() path is that
migrate_vma.vma gets populated, so there is no need to retrieve it
separately.
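A rough sketch of the combined flow, using the names from this series (illustrative pseudocode only; destination page allocation, locking, and error handling are omitted):

```c
/*
 * Sketch of the combined fault + migrate flow (illustrative only).
 * A single pagewalk both faults pages in and collects them for
 * migration; migrate_hmm_range_setup() then replaces the
 * migrate_vma_setup() step.
 */
static int fault_and_migrate(struct hmm_range *range,
			     struct migrate_vma *migrate)
{
	int ret;

	range->migrate = migrate;
	range->default_flags = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_MIGRATE;

	ret = hmm_range_fault(range);	/* one walk: fault + collect */
	if (ret)
		return ret;

	/* Convert hmm_pfns[] into migrate->src[]/dst[] and unmap. */
	migrate_hmm_range_setup(range);

	/* ... allocate device pages into migrate->dst[] ... */

	migrate_vma_pages(migrate);
	migrate_vma_finalize(migrate);
	return 0;
}
```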
Tested in an x86-64 VM with the HMM test device, passing the selftests.
For performance, the migrate throughput tests from the selftests show
numbers similar (within the margin of error) to an unmodified kernel.
Also tested rebased on the
"Remove device private pages from physical address space" series:
https://lore.kernel.org/linux-mm/20260130111050.53670-1-jniethe@nvidia.com/
plus a small adjustment patch, with no problems.
Changes v6-v7:
- rebase on 7.0.0-rc6
- added documentation and comments
- denote the to-be-migrated zero page with HMM_PFN_MIGRATE alone
- got rid of HMM_PFN_INOUT_FLAGS movement in patch 2
- picked up Acked-by from David for patch 1
Changes v5-v6:
- rebase on 7.0.0-rc4
- use range based TLB flushing while unmapping ptes
- gate migration behind HMM_PFN_REQ_MIGRATE for fault and
migrate paths
- always infer migration flags from migrate->flags only
Changes v4-v5:
- rebase on 6.19
- fixed David's email address
- fixed link issue without CONFIG_TRANSPARENT_HUGEPAGE
- refactored into smaller commits
- added more comments to code
Changes v3-v4:
- rebase on 6.19-rc8
- fixed issues found by kernel test robot with random configs
- fixed typos
Changes v2-v3:
- rebase on 6.19-rc7
- fixed issues found by kernel test robot
- fixed smatch issues reported by Dan Carpenter <dan.carpenter@linaro.org>
- fixes to lock handling (pmd/pte) on errors
- added assertions for pmd/pte lock states
- other issues discovered by Matthew, thanks!
Changes v1-v2:
- rebase on 6.19-rc6
- fixed issues found by kernel test robot
- fixed locking (pmd/ptl) to cover the handle_* and prepare_*
regions when migrating
- other issues discovered by Matthew, thanks!
Changes RFC-v1:
- rebase on 6.19-rc5
- adjust for the device THP
- changes from feedback
Revisions:
- RFC https://lore.kernel.org/linux-mm/20250814072045.3637192-1-mpenttil@redhat.com/
- v1: https://lore.kernel.org/all/20260114091923.3950465-1-mpenttil@redhat.com/
- v2: https://lore.kernel.org/all/20260119112502.645059-1-mpenttil@redhat.com/
- v3: https://lore.kernel.org/all/20260126111939.1332983-2-mpenttil@redhat.com/
- v4: https://lore.kernel.org/all/20260202112622.2104213-1-mpenttil@redhat.com/
- v5: https://lore.kernel.org/linux-mm/20260211081301.2940672-1-mpenttil@redhat.com/
- v6: https://lore.kernel.org/linux-mm/20260316062407.3354636-1-mpenttil@redhat.com/
Mika Penttilä (6):
mm:/Kconfig changes for migrate on fault for device pages
mm: Add helper to convert HMM pfn to migrate pfn
mm/hmm: do the plumbing for HMM to participate in migration
mm: setup device page migration in HMM pagewalk
mm: add new testcase for the migrate on fault case
mm:/migrate_device.c: remove migrate_vma_collect_*()
include/linux/hmm.h | 18 +-
include/linux/migrate.h | 25 +-
lib/test_hmm.c | 101 ++-
lib/test_hmm_uapi.h | 19 +-
mm/Kconfig | 2 +
mm/hmm.c | 821 +++++++++++++++++++++++--
mm/migrate_device.c | 591 +++---------------
tools/testing/selftests/mm/hmm-tests.c | 54 ++
8 files changed, 1053 insertions(+), 578 deletions(-)
base-commit: 7aaa8047eafd0bd628065b15757d9b48c5f9c07d
--
2.50.0
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH v7 1/6] mm:/Kconfig changes for migrate on fault for device pages
2026-03-30 4:30 [PATCH v7 0/6] Migrate on fault for device pages mpenttil
@ 2026-03-30 4:30 ` mpenttil
2026-03-30 6:20 ` Christoph Hellwig
2026-03-30 4:30 ` [PATCH v7 2/6] mm: Add helper to convert HMM pfn to migrate pfn mpenttil
` (4 subsequent siblings)
5 siblings, 1 reply; 15+ messages in thread
From: mpenttil @ 2026-03-30 4:30 UTC (permalink / raw)
To: linux-mm
Cc: linux-kernel, Mika Penttilä, Andrew Morton,
David Hildenbrand, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko
From: Mika Penttilä <mpenttil@redhat.com>
With the unified HMM/migrate_device page table walk,
migrate_device needs HMM enabled, and HMM needs
MMU notifiers. Select them explicitly to avoid
breaking random configs.
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: David Hildenbrand <david@kernel.org>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Michal Hocko <mhocko@suse.com>
Signed-off-by: Mika Penttilä <mpenttil@redhat.com>
Acked-by: David Hildenbrand (Arm) <david@kernel.org>
---
mm/Kconfig | 2 ++
1 file changed, 2 insertions(+)
diff --git a/mm/Kconfig b/mm/Kconfig
index ebd8ea353687..583d92bba2e8 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -647,6 +647,7 @@ config MIGRATION
config DEVICE_MIGRATION
def_bool MIGRATION && ZONE_DEVICE
+ select HMM_MIRROR
config ARCH_ENABLE_HUGEPAGE_MIGRATION
bool
@@ -1222,6 +1223,7 @@ config ZONE_DEVICE
config HMM_MIRROR
bool
depends on MMU
+ select MMU_NOTIFIER
config GET_FREE_REGION
bool
--
2.50.0
* [PATCH v7 2/6] mm: Add helper to convert HMM pfn to migrate pfn
2026-03-30 4:30 [PATCH v7 0/6] Migrate on fault for device pages mpenttil
2026-03-30 4:30 ` [PATCH v7 1/6] mm:/Kconfig changes for migrate " mpenttil
@ 2026-03-30 4:30 ` mpenttil
2026-03-30 4:30 ` [PATCH v7 3/6] mm/hmm: do the plumbing for HMM to participate in migration mpenttil
` (3 subsequent siblings)
5 siblings, 0 replies; 15+ messages in thread
From: mpenttil @ 2026-03-30 4:30 UTC (permalink / raw)
To: linux-mm
Cc: linux-kernel, Mika Penttilä, David Hildenbrand,
Jason Gunthorpe, Leon Romanovsky, Alistair Popple, Balbir Singh,
Zi Yan, Matthew Brost
From: Mika Penttilä <mpenttil@redhat.com>
The unified HMM/migrate_device pagewalk does the "collecting"
on the HMM side, so we need a helper to transfer the pfns to the
migrate_vma world.
Cc: David Hildenbrand <david@kernel.org>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Leon Romanovsky <leonro@nvidia.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Balbir Singh <balbirs@nvidia.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Suggested-by: Alistair Popple <apopple@nvidia.com>
Signed-off-by: Mika Penttilä <mpenttil@redhat.com>
---
include/linux/hmm.h | 18 ++++++++++++-
include/linux/migrate.h | 3 ++-
mm/migrate_device.c | 57 +++++++++++++++++++++++++++++++++++++++++
3 files changed, 76 insertions(+), 2 deletions(-)
diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index db75ffc949a7..9adc22b73533 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -13,6 +13,8 @@
struct mmu_interval_notifier;
+struct migrate_vma;
+
/*
* On output:
* 0 - The page is faultable and a future call with
@@ -27,6 +29,12 @@ struct mmu_interval_notifier;
* HMM_PFN_P2PDMA_BUS - Bus mapped P2P transfer
* HMM_PFN_DMA_MAPPED - Flag preserved on input-to-output transformation
* to mark that page is already DMA mapped
+ * HMM_PFN_MIGRATE - The entry is to be migrated. Note, HMM_PFN_MIGRATE
+ * alone, without HMM_PFN_VALID, denotes the
+ * zero page.
+ * This flag and HMM_PFN_COMPOUND are
+ * indicators for migrate_hmm_range_setup() to
+ * set up the migrate pfns.
*
* On input:
* 0 - Return the current state of the page, do not fault it.
@@ -34,6 +42,8 @@ struct mmu_interval_notifier;
* will fail
* HMM_PFN_REQ_WRITE - The output must have HMM_PFN_WRITE or hmm_range_fault()
* will fail. Must be combined with HMM_PFN_REQ_FAULT.
+ * HMM_PFN_REQ_MIGRATE - For default_flags, request to migrate, according to
+ * hmm_range.migrate.flags
*/
enum hmm_pfn_flags {
/* Output fields and flags */
@@ -48,11 +58,15 @@ enum hmm_pfn_flags {
HMM_PFN_P2PDMA = 1UL << (BITS_PER_LONG - 5),
HMM_PFN_P2PDMA_BUS = 1UL << (BITS_PER_LONG - 6),
- HMM_PFN_ORDER_SHIFT = (BITS_PER_LONG - 11),
+ /* Migrate request */
+ HMM_PFN_MIGRATE = 1UL << (BITS_PER_LONG - 7),
+ HMM_PFN_COMPOUND = 1UL << (BITS_PER_LONG - 8),
+ HMM_PFN_ORDER_SHIFT = (BITS_PER_LONG - 13),
/* Input flags */
HMM_PFN_REQ_FAULT = HMM_PFN_VALID,
HMM_PFN_REQ_WRITE = HMM_PFN_WRITE,
+ HMM_PFN_REQ_MIGRATE = HMM_PFN_MIGRATE,
HMM_PFN_FLAGS = ~((1UL << HMM_PFN_ORDER_SHIFT) - 1),
};
@@ -107,6 +121,7 @@ static inline unsigned int hmm_pfn_to_map_order(unsigned long hmm_pfn)
* @default_flags: default flags for the range (write, read, ... see hmm doc)
* @pfn_flags_mask: allows to mask pfn flags so that only default_flags matter
* @dev_private_owner: owner of device private pages
+ * @migrate: structure for migrating a range of a VMA
*/
struct hmm_range {
struct mmu_interval_notifier *notifier;
@@ -117,6 +132,7 @@ struct hmm_range {
unsigned long default_flags;
unsigned long pfn_flags_mask;
void *dev_private_owner;
+ struct migrate_vma *migrate;
};
/*
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index d5af2b7f577b..425ab5242da0 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -3,6 +3,7 @@
#define _LINUX_MIGRATE_H
#include <linux/mm.h>
+#include <linux/hmm.h>
#include <linux/mempolicy.h>
#include <linux/migrate_mode.h>
#include <linux/hugetlb.h>
@@ -200,7 +201,7 @@ void migrate_device_pages(unsigned long *src_pfns, unsigned long *dst_pfns,
unsigned long npages);
void migrate_device_finalize(unsigned long *src_pfns,
unsigned long *dst_pfns, unsigned long npages);
-
+void migrate_hmm_range_setup(struct hmm_range *range);
#endif /* CONFIG_MIGRATION */
#endif /* _LINUX_MIGRATE_H */
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index 8079676c8f1f..a4062fd21490 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -1489,3 +1489,60 @@ int migrate_device_coherent_folio(struct folio *folio)
return 0;
return -EBUSY;
}
+
+/**
+ * migrate_hmm_range_setup() - prepare to migrate a range of memory
+ * @range: contains pointer to struct migrate_vma to be set up.
+ *
+ * When collecting has been done with hmm_range_fault(), this
+ * should be called next. It completes range->migrate by
+ * populating migrate->src[] and migrate->dst[]
+ * using range->hmm_pfns[].
+ * Also, migrate->cpages and migrate->npages get initialized.
+ * After migrate_hmm_range_setup(), range->migrate is good
+ * for the rest of the migrate_vma_* flow.
+void migrate_hmm_range_setup(struct hmm_range *range)
+{
+
+ struct migrate_vma *migrate = range->migrate;
+
+ if (!migrate)
+ return;
+
+ migrate->npages = (migrate->end - migrate->start) >> PAGE_SHIFT;
+ migrate->cpages = 0;
+
+ for (unsigned long i = 0; i < migrate->npages; i++) {
+
+ unsigned long pfn = range->hmm_pfns[i];
+
+ /*
+ * We are only interested in entries to be
+ * migrated.
+ */
+ if (!(pfn & HMM_PFN_MIGRATE)) {
+ migrate->src[i] = 0;
+ migrate->dst[i] = 0;
+ continue;
+ }
+
+ migrate->cpages++;
+
+ /* HMM_PFN_MIGRATE without HMM_PFN_VALID denotes the special zero page */
+ if (pfn & (HMM_PFN_VALID))
+ migrate->src[i] = migrate_pfn(page_to_pfn(hmm_pfn_to_page(pfn)))
+ | MIGRATE_PFN_MIGRATE;
+ else
+ migrate->src[i] = MIGRATE_PFN_MIGRATE;
+
+ migrate->src[i] |= (pfn & HMM_PFN_WRITE) ? MIGRATE_PFN_WRITE : 0;
+ migrate->src[i] |= (pfn & HMM_PFN_COMPOUND) ? MIGRATE_PFN_COMPOUND : 0;
+ migrate->dst[i] = 0;
+ }
+
+ if (migrate->cpages)
+ migrate_vma_unmap(migrate);
+
+}
+EXPORT_SYMBOL(migrate_hmm_range_setup);
--
2.50.0
* [PATCH v7 3/6] mm/hmm: do the plumbing for HMM to participate in migration
2026-03-30 4:30 [PATCH v7 0/6] Migrate on fault for device pages mpenttil
2026-03-30 4:30 ` [PATCH v7 1/6] mm:/Kconfig changes for migrate " mpenttil
2026-03-30 4:30 ` [PATCH v7 2/6] mm: Add helper to convert HMM pfn to migrate pfn mpenttil
@ 2026-03-30 4:30 ` mpenttil
2026-03-30 11:05 ` kernel test robot
2026-03-30 11:05 ` kernel test robot
2026-03-30 4:30 ` [PATCH v7 4/6] mm: setup device page migration in HMM pagewalk mpenttil
` (2 subsequent siblings)
5 siblings, 2 replies; 15+ messages in thread
From: mpenttil @ 2026-03-30 4:30 UTC (permalink / raw)
To: linux-mm
Cc: linux-kernel, Mika Penttilä, David Hildenbrand,
Jason Gunthorpe, Leon Romanovsky, Alistair Popple, Balbir Singh,
Zi Yan, Matthew Brost
From: Mika Penttilä <mpenttil@redhat.com>
Do the preparations in hmm_range_fault() and the pagewalk callbacks
for the "collecting" part of migration, needed for migrate
on fault.
These steps include taking the pmd/pte locks when migrating, capturing
the vma for further migrate actions, and calling the
still-dummy hmm_vma_handle_migrate_prepare_pmd() and
hmm_vma_handle_migrate_prepare() functions in the pagewalk.
Cc: David Hildenbrand <david@kernel.org>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Leon Romanovsky <leonro@nvidia.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Balbir Singh <balbirs@nvidia.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Suggested-by: Alistair Popple <apopple@nvidia.com>
Signed-off-by: Mika Penttilä <mpenttil@redhat.com>
---
include/linux/migrate.h | 17 +-
lib/test_hmm.c | 2 +-
mm/hmm.c | 423 +++++++++++++++++++++++++++++++++++-----
3 files changed, 387 insertions(+), 55 deletions(-)
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 425ab5242da0..037e7430edb9 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -104,6 +104,15 @@ static inline void softleaf_entry_wait_on_locked(softleaf_t entry, spinlock_t *p
WARN_ON_ONCE(1);
spin_unlock(ptl);
+
+enum migrate_vma_info {
+ MIGRATE_VMA_SELECT_NONE = 0,
+ MIGRATE_VMA_SELECT_COMPOUND = MIGRATE_VMA_SELECT_NONE,
+};
+
+static inline enum migrate_vma_info hmm_select_migrate(struct hmm_range *range)
+{
+ return MIGRATE_VMA_SELECT_NONE;
}
#endif /* CONFIG_MIGRATION */
@@ -149,7 +158,7 @@ static inline unsigned long migrate_pfn(unsigned long pfn)
return (pfn << MIGRATE_PFN_SHIFT) | MIGRATE_PFN_VALID;
}
-enum migrate_vma_direction {
+enum migrate_vma_info {
MIGRATE_VMA_SELECT_SYSTEM = 1 << 0,
MIGRATE_VMA_SELECT_DEVICE_PRIVATE = 1 << 1,
MIGRATE_VMA_SELECT_DEVICE_COHERENT = 1 << 2,
@@ -191,6 +200,12 @@ struct migrate_vma {
struct page *fault_page;
};
+// TODO: enable migration
+static inline enum migrate_vma_info hmm_select_migrate(struct hmm_range *range)
+{
+ return 0;
+}
+
int migrate_vma_setup(struct migrate_vma *args);
void migrate_vma_pages(struct migrate_vma *migrate);
void migrate_vma_finalize(struct migrate_vma *migrate);
diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index 0964d53365e6..01aa0b60df2f 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -145,7 +145,7 @@ static bool dmirror_is_private_zone(struct dmirror_device *mdevice)
HMM_DMIRROR_MEMORY_DEVICE_PRIVATE);
}
-static enum migrate_vma_direction
+static enum migrate_vma_info
dmirror_select_device(struct dmirror *dmirror)
{
return (dmirror->mdevice->zone_device_type ==
diff --git a/mm/hmm.c b/mm/hmm.c
index 5955f2f0c83d..642593c3505f 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -20,6 +20,7 @@
#include <linux/pagemap.h>
#include <linux/leafops.h>
#include <linux/hugetlb.h>
+#include <linux/migrate.h>
#include <linux/memremap.h>
#include <linux/sched/mm.h>
#include <linux/jump_label.h>
@@ -27,14 +28,44 @@
#include <linux/pci-p2pdma.h>
#include <linux/mmu_notifier.h>
#include <linux/memory_hotplug.h>
+#include <asm/tlbflush.h>
#include "internal.h"
struct hmm_vma_walk {
- struct hmm_range *range;
- unsigned long last;
+ struct mmu_notifier_range mmu_range;
+ struct vm_area_struct *vma;
+ struct hmm_range *range;
+ unsigned long start;
+ unsigned long end;
+ unsigned long last;
+ /*
+ * For migration we need pte/pmd
+ * locked for the handle_* and
+ * prepare_* regions. While faulting
+ * we have to drop the locks and
+ * start again.
+ * ptelocked and pmdlocked
+ * hold the state and tell whether the
+ * locks need to be dropped before faulting.
+ * ptl is the lock held for pte or pmd.
+ *
+ */
+ bool ptelocked;
+ bool pmdlocked;
+ spinlock_t *ptl;
};
+#define HMM_ASSERT_PTE_LOCKED(hmm_vma_walk, locked) \
+ WARN_ON_ONCE(hmm_vma_walk->ptelocked != locked)
+
+#define HMM_ASSERT_PMD_LOCKED(hmm_vma_walk, locked) \
+ WARN_ON_ONCE(hmm_vma_walk->pmdlocked != locked)
+
+#define HMM_ASSERT_UNLOCKED(hmm_vma_walk) \
+ WARN_ON_ONCE(hmm_vma_walk->ptelocked || \
+ hmm_vma_walk->pmdlocked)
+
enum {
HMM_NEED_FAULT = 1 << 0,
HMM_NEED_WRITE_FAULT = 1 << 1,
@@ -48,14 +79,37 @@ enum {
};
static int hmm_pfns_fill(unsigned long addr, unsigned long end,
- struct hmm_range *range, unsigned long cpu_flags)
+ struct hmm_vma_walk *hmm_vma_walk, unsigned long cpu_flags)
{
+ struct hmm_range *range = hmm_vma_walk->range;
unsigned long i = (addr - range->start) >> PAGE_SHIFT;
+ enum migrate_vma_info minfo;
+ bool migrate = false;
+
+ minfo = hmm_select_migrate(range);
+ if (cpu_flags != HMM_PFN_ERROR) {
+ if (minfo && (vma_is_anonymous(hmm_vma_walk->vma))) {
+ cpu_flags |= HMM_PFN_MIGRATE;
+ migrate = true;
+ }
+ }
+
+ if (migrate && thp_migration_supported() &&
+ (minfo & MIGRATE_VMA_SELECT_COMPOUND) &&
+ IS_ALIGNED(addr, HPAGE_PMD_SIZE) &&
+ IS_ALIGNED(end, HPAGE_PMD_SIZE)) {
+ range->hmm_pfns[i] &= HMM_PFN_INOUT_FLAGS;
+ range->hmm_pfns[i] |= cpu_flags | HMM_PFN_COMPOUND;
+ addr += PAGE_SIZE;
+ i++;
+ cpu_flags = 0;
+ }
for (; addr < end; addr += PAGE_SIZE, i++) {
range->hmm_pfns[i] &= HMM_PFN_INOUT_FLAGS;
range->hmm_pfns[i] |= cpu_flags;
}
+
return 0;
}
@@ -78,6 +132,7 @@ static int hmm_vma_fault(unsigned long addr, unsigned long end,
unsigned int fault_flags = FAULT_FLAG_REMOTE;
WARN_ON_ONCE(!required_fault);
+ HMM_ASSERT_UNLOCKED(hmm_vma_walk);
hmm_vma_walk->last = addr;
if (required_fault & HMM_NEED_WRITE_FAULT) {
@@ -171,11 +226,11 @@ static int hmm_vma_walk_hole(unsigned long addr, unsigned long end,
if (!walk->vma) {
if (required_fault)
return -EFAULT;
- return hmm_pfns_fill(addr, end, range, HMM_PFN_ERROR);
+ return hmm_pfns_fill(addr, end, hmm_vma_walk, HMM_PFN_ERROR);
}
if (required_fault)
return hmm_vma_fault(addr, end, required_fault, walk);
- return hmm_pfns_fill(addr, end, range, 0);
+ return hmm_pfns_fill(addr, end, hmm_vma_walk, 0);
}
static inline unsigned long hmm_pfn_flags_order(unsigned long order)
@@ -208,8 +263,13 @@ static int hmm_vma_handle_pmd(struct mm_walk *walk, unsigned long addr,
cpu_flags = pmd_to_hmm_pfn_flags(range, pmd);
required_fault =
hmm_range_need_fault(hmm_vma_walk, hmm_pfns, npages, cpu_flags);
- if (required_fault)
+ if (required_fault) {
+ if (hmm_vma_walk->pmdlocked) {
+ spin_unlock(hmm_vma_walk->ptl);
+ hmm_vma_walk->pmdlocked = false;
+ }
return hmm_vma_fault(addr, end, required_fault, walk);
+ }
pfn = pmd_pfn(pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
for (i = 0; addr < end; addr += PAGE_SIZE, i++, pfn++) {
@@ -289,14 +349,23 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
goto fault;
if (softleaf_is_migration(entry)) {
- pte_unmap(ptep);
- hmm_vma_walk->last = addr;
- migration_entry_wait(walk->mm, pmdp, addr);
- return -EBUSY;
+ if (!hmm_select_migrate(range)) {
+ HMM_ASSERT_UNLOCKED(hmm_vma_walk);
+ hmm_vma_walk->last = addr;
+ migration_entry_wait(walk->mm, pmdp, addr);
+ return -EBUSY;
+ } else
+ goto out;
}
/* Report error for everything else */
- pte_unmap(ptep);
+
+ if (hmm_vma_walk->ptelocked) {
+ pte_unmap_unlock(ptep, hmm_vma_walk->ptl);
+ hmm_vma_walk->ptelocked = false;
+ } else
+ pte_unmap(ptep);
+
return -EFAULT;
}
@@ -313,7 +382,12 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
if (!vm_normal_page(walk->vma, addr, pte) &&
!is_zero_pfn(pte_pfn(pte))) {
if (hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0)) {
- pte_unmap(ptep);
+ if (hmm_vma_walk->ptelocked) {
+ pte_unmap_unlock(ptep, hmm_vma_walk->ptl);
+ hmm_vma_walk->ptelocked = false;
+ } else
+ pte_unmap(ptep);
+
return -EFAULT;
}
new_pfn_flags = HMM_PFN_ERROR;
@@ -326,7 +400,11 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
return 0;
fault:
- pte_unmap(ptep);
+ if (hmm_vma_walk->ptelocked) {
+ pte_unmap_unlock(ptep, hmm_vma_walk->ptl);
+ hmm_vma_walk->ptelocked = false;
+ } else
+ pte_unmap(ptep);
/* Fault any virtual address we were asked to fault */
return hmm_vma_fault(addr, end, required_fault, walk);
}
@@ -370,13 +448,18 @@ static int hmm_vma_handle_absent_pmd(struct mm_walk *walk, unsigned long start,
required_fault = hmm_range_need_fault(hmm_vma_walk, hmm_pfns,
npages, 0);
if (required_fault) {
- if (softleaf_is_device_private(entry))
+ if (softleaf_is_device_private(entry)) {
+ if (hmm_vma_walk->pmdlocked) {
+ spin_unlock(hmm_vma_walk->ptl);
+ hmm_vma_walk->pmdlocked = false;
+ }
return hmm_vma_fault(addr, end, required_fault, walk);
+ }
else
return -EFAULT;
}
- return hmm_pfns_fill(start, end, range, HMM_PFN_ERROR);
+ return hmm_pfns_fill(start, end, hmm_vma_walk, HMM_PFN_ERROR);
}
#else
static int hmm_vma_handle_absent_pmd(struct mm_walk *walk, unsigned long start,
@@ -384,15 +467,100 @@ static int hmm_vma_handle_absent_pmd(struct mm_walk *walk, unsigned long start,
pmd_t pmd)
{
struct hmm_vma_walk *hmm_vma_walk = walk->private;
- struct hmm_range *range = hmm_vma_walk->range;
unsigned long npages = (end - start) >> PAGE_SHIFT;
if (hmm_range_need_fault(hmm_vma_walk, hmm_pfns, npages, 0))
return -EFAULT;
- return hmm_pfns_fill(start, end, range, HMM_PFN_ERROR);
+ return hmm_pfns_fill(start, end, hmm_vma_walk, HMM_PFN_ERROR);
}
#endif /* CONFIG_ARCH_ENABLE_THP_MIGRATION */
+#ifdef CONFIG_DEVICE_MIGRATION
+static int hmm_vma_handle_migrate_prepare_pmd(const struct mm_walk *walk,
+ pmd_t *pmdp,
+ unsigned long start,
+ unsigned long end,
+ unsigned long *hmm_pfn)
+{
+ // TODO: implement migration entry insertion
+ return 0;
+}
+
+static int hmm_vma_handle_migrate_prepare(const struct mm_walk *walk,
+ pmd_t *pmdp,
+ pte_t *pte,
+ unsigned long addr,
+ unsigned long *hmm_pfn)
+{
+ // TODO: implement migration entry insertion
+ return 0;
+}
+
+static int hmm_vma_walk_split(pmd_t *pmdp,
+ unsigned long addr,
+ struct mm_walk *walk)
+{
+ // TODO : implement split
+ return 0;
+}
+
+#else
+static int hmm_vma_handle_migrate_prepare_pmd(const struct mm_walk *walk,
+ pmd_t *pmdp,
+ unsigned long start,
+ unsigned long end,
+ unsigned long *hmm_pfn)
+{
+ return 0;
+}
+
+static int hmm_vma_handle_migrate_prepare(const struct mm_walk *walk,
+ pmd_t *pmdp,
+ pte_t *pte,
+ unsigned long addr,
+ unsigned long *hmm_pfn)
+{
+ return 0;
+}
+
+static int hmm_vma_walk_split(pmd_t *pmdp,
+ unsigned long addr,
+ struct mm_walk *walk)
+{
+ return 0;
+}
+#endif
+
+static int hmm_vma_capture_migrate_range(unsigned long start,
+ unsigned long end,
+ struct mm_walk *walk)
+{
+ struct hmm_vma_walk *hmm_vma_walk = walk->private;
+ struct hmm_range *range = hmm_vma_walk->range;
+
+ if (!hmm_select_migrate(range))
+ return 0;
+
+ if (hmm_vma_walk->vma && (hmm_vma_walk->vma != walk->vma))
+ return -ERANGE;
+
+ hmm_vma_walk->vma = walk->vma;
+ hmm_vma_walk->start = start;
+ hmm_vma_walk->end = end;
+
+ if (end - start > range->end - range->start)
+ return -ERANGE;
+
+ if (!hmm_vma_walk->mmu_range.owner) {
+ mmu_notifier_range_init_owner(&hmm_vma_walk->mmu_range, MMU_NOTIFY_MIGRATE, 0,
+ walk->vma->vm_mm, start, end,
+ range->dev_private_owner);
+ mmu_notifier_invalidate_range_start(&hmm_vma_walk->mmu_range);
+ }
+
+ return 0;
+}
+
static int hmm_vma_walk_pmd(pmd_t *pmdp,
unsigned long start,
unsigned long end,
@@ -400,46 +568,130 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
{
struct hmm_vma_walk *hmm_vma_walk = walk->private;
struct hmm_range *range = hmm_vma_walk->range;
- unsigned long *hmm_pfns =
- &range->hmm_pfns[(start - range->start) >> PAGE_SHIFT];
unsigned long npages = (end - start) >> PAGE_SHIFT;
+ struct mm_struct *mm = walk->vma->vm_mm;
+ enum migrate_vma_info minfo;
unsigned long addr = start;
+ unsigned long *hmm_pfns;
+ unsigned long i;
pte_t *ptep;
pmd_t pmd;
+ int r = 0;
+
+ minfo = hmm_select_migrate(range);
again:
- pmd = pmdp_get_lockless(pmdp);
- if (pmd_none(pmd))
- return hmm_vma_walk_hole(start, end, -1, walk);
+ hmm_pfns = &range->hmm_pfns[(addr - range->start) >> PAGE_SHIFT];
+ hmm_vma_walk->ptelocked = false;
+ hmm_vma_walk->pmdlocked = false;
+
+ if (minfo) {
+ hmm_vma_walk->ptl = pmd_lock(mm, pmdp);
+ hmm_vma_walk->pmdlocked = true;
+ pmd = pmdp_get(pmdp);
+ } else
+ pmd = pmdp_get_lockless(pmdp);
+
+ if (pmd_none(pmd)) {
+ r = hmm_vma_walk_hole(start, end, -1, walk);
+
+ if (hmm_vma_walk->pmdlocked) {
+ spin_unlock(hmm_vma_walk->ptl);
+ hmm_vma_walk->pmdlocked = false;
+ }
+ return r;
+ }
if (thp_migration_supported() && pmd_is_migration_entry(pmd)) {
- if (hmm_range_need_fault(hmm_vma_walk, hmm_pfns, npages, 0)) {
+ if (!minfo) {
+ if (hmm_range_need_fault(hmm_vma_walk, hmm_pfns, npages, 0)) {
+ hmm_vma_walk->last = addr;
+ pmd_migration_entry_wait(walk->mm, pmdp);
+ return -EBUSY;
+ }
+ }
+ for (i = 0; addr < end; addr += PAGE_SIZE, i++)
+ hmm_pfns[i] &= HMM_PFN_INOUT_FLAGS;
+
+ if (hmm_vma_walk->pmdlocked) {
+ spin_unlock(hmm_vma_walk->ptl);
+ hmm_vma_walk->pmdlocked = false;
+ }
+
+ return 0;
+ }
+
+ if (pmd_trans_huge(pmd) || !pmd_present(pmd)) {
+
+ if (!pmd_present(pmd)) {
+ r = hmm_vma_handle_absent_pmd(walk, start, end, hmm_pfns,
+ pmd);
+ // If not migrating we are done
+ if (r || !minfo) {
+ if (hmm_vma_walk->pmdlocked) {
+ spin_unlock(hmm_vma_walk->ptl);
+ hmm_vma_walk->pmdlocked = false;
+ }
+ return r;
+ }
+ }
+
+ if (pmd_trans_huge(pmd)) {
+
+ /*
+ * No need to take pmd_lock here if not migrating,
+ * even if some other thread is splitting the huge
+ * pmd we will get that event through mmu_notifier callback.
+ *
+ * So just read pmd value and check again it's a transparent
+ * huge or device mapping one and compute corresponding pfn
+ * values.
+ */
+
+ if (!minfo) {
+ pmd = pmdp_get_lockless(pmdp);
+ if (!pmd_trans_huge(pmd))
+ goto again;
+ }
+
+ r = hmm_vma_handle_pmd(walk, addr, end, hmm_pfns, pmd);
+
+ // If not migrating we are done
+ if (r || !minfo) {
+ if (hmm_vma_walk->pmdlocked) {
+ spin_unlock(hmm_vma_walk->ptl);
+ hmm_vma_walk->pmdlocked = false;
+ }
+ return r;
+ }
+ }
+
+ r = hmm_vma_handle_migrate_prepare_pmd(walk, pmdp, start, end, hmm_pfns);
+
+ if (hmm_vma_walk->pmdlocked) {
+ spin_unlock(hmm_vma_walk->ptl);
+ hmm_vma_walk->pmdlocked = false;
+ }
+
+ if (r == -ENOENT) {
+ r = hmm_vma_walk_split(pmdp, addr, walk);
+ if (r) {
+ /* Split not successful, skip */
+ return hmm_pfns_fill(start, end, hmm_vma_walk, HMM_PFN_ERROR);
+ }
+
+ /* Split successful, reloop */
hmm_vma_walk->last = addr;
- pmd_migration_entry_wait(walk->mm, pmdp);
return -EBUSY;
}
- return hmm_pfns_fill(start, end, range, 0);
- }
- if (!pmd_present(pmd))
- return hmm_vma_handle_absent_pmd(walk, start, end, hmm_pfns,
- pmd);
+ return r;
- if (pmd_trans_huge(pmd)) {
- /*
- * No need to take pmd_lock here, even if some other thread
- * is splitting the huge pmd we will get that event through
- * mmu_notifier callback.
- *
- * So just read pmd value and check again it's a transparent
- * huge or device mapping one and compute corresponding pfn
- * values.
- */
- pmd = pmdp_get_lockless(pmdp);
- if (!pmd_trans_huge(pmd))
- goto again;
+ }
- return hmm_vma_handle_pmd(walk, addr, end, hmm_pfns, pmd);
+ if (hmm_vma_walk->pmdlocked) {
+ spin_unlock(hmm_vma_walk->ptl);
+ hmm_vma_walk->pmdlocked = false;
}
/*
@@ -451,22 +703,43 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
if (pmd_bad(pmd)) {
if (hmm_range_need_fault(hmm_vma_walk, hmm_pfns, npages, 0))
return -EFAULT;
- return hmm_pfns_fill(start, end, range, HMM_PFN_ERROR);
+ return hmm_pfns_fill(start, end, hmm_vma_walk, HMM_PFN_ERROR);
}
- ptep = pte_offset_map(pmdp, addr);
+ if (minfo) {
+ ptep = pte_offset_map_lock(mm, pmdp, addr, &hmm_vma_walk->ptl);
+ if (ptep)
+ hmm_vma_walk->ptelocked = true;
+ } else
+ ptep = pte_offset_map(pmdp, addr);
if (!ptep)
goto again;
+
for (; addr < end; addr += PAGE_SIZE, ptep++, hmm_pfns++) {
- int r;
r = hmm_vma_handle_pte(walk, addr, end, pmdp, ptep, hmm_pfns);
if (r) {
- /* hmm_vma_handle_pte() did pte_unmap() */
+ /* hmm_vma_handle_pte() did pte_unmap() / pte_unmap_unlock */
return r;
}
+
+ r = hmm_vma_handle_migrate_prepare(walk, pmdp, ptep, addr, hmm_pfns);
+ if (r == -EAGAIN) {
+ HMM_ASSERT_UNLOCKED(hmm_vma_walk);
+ goto again;
+ }
+ if (r) {
+ hmm_pfns_fill(addr, end, hmm_vma_walk, HMM_PFN_ERROR);
+ break;
+ }
}
- pte_unmap(ptep - 1);
+
+ if (hmm_vma_walk->ptelocked) {
+ pte_unmap_unlock(ptep - 1, hmm_vma_walk->ptl);
+ hmm_vma_walk->ptelocked = false;
+ } else
+ pte_unmap(ptep - 1);
+
return 0;
}
@@ -600,6 +873,11 @@ static int hmm_vma_walk_test(unsigned long start, unsigned long end,
struct hmm_vma_walk *hmm_vma_walk = walk->private;
struct hmm_range *range = hmm_vma_walk->range;
struct vm_area_struct *vma = walk->vma;
+ int r;
+
+ r = hmm_vma_capture_migrate_range(start, end, walk);
+ if (r)
+ return r;
if (!(vma->vm_flags & (VM_IO | VM_PFNMAP)) &&
vma->vm_flags & VM_READ)
@@ -622,7 +900,7 @@ static int hmm_vma_walk_test(unsigned long start, unsigned long end,
(end - start) >> PAGE_SHIFT, 0))
return -EFAULT;
- hmm_pfns_fill(start, end, range, HMM_PFN_ERROR);
+ hmm_pfns_fill(start, end, hmm_vma_walk, HMM_PFN_ERROR);
/* Skip this vma and continue processing the next vma. */
return 1;
@@ -652,9 +930,17 @@ static const struct mm_walk_ops hmm_walk_ops = {
* the invalidation to finish.
* -EFAULT: A page was requested to be valid and could not be made valid
* ie it has no backing VMA or it is illegal to access
+ * -ERANGE: The range crosses multiple VMAs, or the hmm_pfns array
+ * is too small.
*
* This is similar to get_user_pages(), except that it can read the page tables
* without mutating them (ie causing faults).
+ *
+ * If you want to migrate after faulting, call hmm_range_fault() with
+ * HMM_PFN_REQ_MIGRATE and initialize the range.migrate field.
+ * After hmm_range_fault(), call migrate_hmm_range_setup() instead of
+ * migrate_vma_setup(), and after that follow the normal migrate call path.
+ *
*/
int hmm_range_fault(struct hmm_range *range)
{
@@ -662,16 +948,34 @@ int hmm_range_fault(struct hmm_range *range)
.range = range,
.last = range->start,
};
- struct mm_struct *mm = range->notifier->mm;
+ struct mm_struct *mm;
+ bool is_fault_path;
int ret;
+ /*
+ *
+ * Could be serving a device fault or come from migrate
+ * entry point. For the former we have not resolved the vma
+ * yet, and the latter we don't have a notifier (but have a vma).
+ *
+ */
+#ifdef CONFIG_DEVICE_MIGRATION
+ is_fault_path = !!range->notifier;
+ mm = is_fault_path ? range->notifier->mm : range->migrate->vma->vm_mm;
+#else
+ is_fault_path = true;
+ mm = range->notifier->mm;
+#endif
mmap_assert_locked(mm);
do {
/* If range is no longer valid force retry. */
- if (mmu_interval_check_retry(range->notifier,
- range->notifier_seq))
- return -EBUSY;
+ if (is_fault_path && mmu_interval_check_retry(range->notifier,
+ range->notifier_seq)) {
+ ret = -EBUSY;
+ break;
+ }
+
ret = walk_page_range(mm, hmm_vma_walk.last, range->end,
&hmm_walk_ops, &hmm_vma_walk);
/*
@@ -681,6 +985,19 @@ int hmm_range_fault(struct hmm_range *range)
* output, and all >= are still at their input values.
*/
} while (ret == -EBUSY);
+
+#ifdef CONFIG_DEVICE_MIGRATION
+ if (hmm_select_migrate(range) && range->migrate &&
+ hmm_vma_walk.mmu_range.owner) {
+ /* The migrate_vma path has the following initialized */
+ if (is_fault_path) {
+ range->migrate->vma = hmm_vma_walk.vma;
+ range->migrate->start = range->start;
+ range->migrate->end = hmm_vma_walk.end;
+ }
+ mmu_notifier_invalidate_range_end(&hmm_vma_walk.mmu_range);
+ }
+#endif
return ret;
}
EXPORT_SYMBOL(hmm_range_fault);
--
2.50.0
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH v7 4/6] mm: setup device page migration in HMM pagewalk
2026-03-30 4:30 [PATCH v7 0/6] Migrate on fault for device pages mpenttil
` (2 preceding siblings ...)
2026-03-30 4:30 ` [PATCH v7 3/6] mm/hmm: do the plumbing for HMM to participate in migration mpenttil
@ 2026-03-30 4:30 ` mpenttil
2026-03-30 4:30 ` [PATCH v7 5/6] mm: add new testcase for the migrate on fault case mpenttil
2026-03-30 4:30 ` [PATCH v7 6/6] mm:/migrate_device.c: remove migrate_vma_collect_*() mpenttil
5 siblings, 0 replies; 15+ messages in thread
From: mpenttil @ 2026-03-30 4:30 UTC (permalink / raw)
To: linux-mm
Cc: linux-kernel, Mika Penttilä, David Hildenbrand,
Jason Gunthorpe, Leon Romanovsky, Alistair Popple, Balbir Singh,
Zi Yan, Matthew Brost
From: Mika Penttilä <mpenttil@redhat.com>
Implement the hmm_vma_handle_migrate_prepare_pmd() and
hmm_vma_handle_migrate_prepare() functions, which are mostly
carried over from migrate_device.c, as well as the needed
split functions.
Make migrate_device use the HMM pagewalk for the collect
phase of migration.
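For reference, the combined fault-and-migrate flow this enables for a
driver looks roughly like the sketch below. It mirrors the dmirror test
added later in this series; field and function names are taken from the
series itself, but mm/notifier setup, error handling, and the device-side
allocate/copy step are elided, so treat it as pseudocode rather than a
complete driver:

```c
	struct migrate_vma migrate = {
		.src = src_pfns,
		.dst = dst_pfns,
		.pgmap_owner = owner,
		.flags = MIGRATE_VMA_SELECT_SYSTEM,
	};
	struct hmm_range range = {
		.notifier = &notifier,
		.start = start,
		.end = end,
		.hmm_pfns = src_pfns,
		.default_flags = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_MIGRATE,
		.dev_private_owner = owner,
		.migrate = &migrate,
	};
	int ret;

	mmap_read_lock(mm);
	/* One page table walk: fault pages in and collect them for migration */
	ret = hmm_range_fault(&range);
	/* Replaces migrate_vma_setup() on this path */
	migrate_hmm_range_setup(&range);
	/* driver allocates device pages and copies data here, then: */
	migrate_vma_pages(&migrate);
	migrate_vma_finalize(&migrate);
	mmap_read_unlock(mm);
```

The point of the sketch is that the walk happens once: faulting and
installing migration entries are done in the same hmm_range_fault() pass,
instead of the two or three walks the old sequences required.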
Cc: David Hildenbrand <david@kernel.org>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Leon Romanovsky <leonro@nvidia.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Balbir Singh <balbirs@nvidia.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Suggested-by: Alistair Popple <apopple@nvidia.com>
Signed-off-by: Mika Penttilä <mpenttil@redhat.com>
---
include/linux/migrate.h | 9 +-
mm/hmm.c | 420 ++++++++++++++++++++++++++++++++++++++--
mm/migrate_device.c | 26 ++-
3 files changed, 438 insertions(+), 17 deletions(-)
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 037e7430edb9..9e1081847d1f 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -163,6 +163,7 @@ enum migrate_vma_info {
MIGRATE_VMA_SELECT_DEVICE_PRIVATE = 1 << 1,
MIGRATE_VMA_SELECT_DEVICE_COHERENT = 1 << 2,
MIGRATE_VMA_SELECT_COMPOUND = 1 << 3,
+ MIGRATE_VMA_FAULT = 1 << 4,
};
struct migrate_vma {
@@ -200,10 +201,14 @@ struct migrate_vma {
struct page *fault_page;
};
-// TODO: enable migration
static inline enum migrate_vma_info hmm_select_migrate(struct hmm_range *range)
{
- return 0;
+ enum migrate_vma_info minfo;
+
+ minfo = (range->default_flags & HMM_PFN_REQ_MIGRATE) ?
+ range->migrate->flags : 0;
+
+ return minfo;
}
int migrate_vma_setup(struct migrate_vma *args);
diff --git a/mm/hmm.c b/mm/hmm.c
index 642593c3505f..ce693938e5dc 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -476,34 +476,424 @@ static int hmm_vma_handle_absent_pmd(struct mm_walk *walk, unsigned long start,
#endif /* CONFIG_ARCH_ENABLE_THP_MIGRATION */
#ifdef CONFIG_DEVICE_MIGRATION
+/**
+ * migrate_vma_split_folio() - Helper function to split a THP folio
+ * @folio: the folio to split
+ * @fault_page: struct page associated with the fault if any
+ *
+ * Returns 0 on success
+ */
+static int migrate_vma_split_folio(struct folio *folio,
+ struct page *fault_page)
+{
+ int ret;
+ struct folio *fault_folio = fault_page ? page_folio(fault_page) : NULL;
+ struct folio *new_fault_folio = NULL;
+
+ if (folio != fault_folio) {
+ folio_get(folio);
+ folio_lock(folio);
+ }
+
+ ret = split_folio(folio);
+ if (ret) {
+ if (folio != fault_folio) {
+ folio_unlock(folio);
+ folio_put(folio);
+ }
+ return ret;
+ }
+
+ new_fault_folio = fault_page ? page_folio(fault_page) : NULL;
+
+ /*
+ * Ensure the lock is held on the correct
+ * folio after the split
+ */
+ if (!new_fault_folio) {
+ folio_unlock(folio);
+ folio_put(folio);
+ } else if (folio != new_fault_folio) {
+ if (new_fault_folio != fault_folio) {
+ folio_get(new_fault_folio);
+ folio_lock(new_fault_folio);
+ }
+ folio_unlock(folio);
+ folio_put(folio);
+ }
+
+ return 0;
+}
+
static int hmm_vma_handle_migrate_prepare_pmd(const struct mm_walk *walk,
pmd_t *pmdp,
unsigned long start,
unsigned long end,
unsigned long *hmm_pfn)
{
- // TODO: implement migration entry insertion
- return 0;
+ struct hmm_vma_walk *hmm_vma_walk = walk->private;
+ struct hmm_range *range = hmm_vma_walk->range;
+ struct migrate_vma *migrate = range->migrate;
+ struct folio *fault_folio = NULL;
+ struct folio *folio;
+ enum migrate_vma_info minfo;
+ unsigned long i;
+ int r = 0;
+
+ minfo = hmm_select_migrate(range);
+ if (!minfo)
+ return r;
+
+ WARN_ON_ONCE(!migrate);
+ HMM_ASSERT_PMD_LOCKED(hmm_vma_walk, true);
+
+ fault_folio = migrate->fault_page ?
+ page_folio(migrate->fault_page) : NULL;
+
+ if (pmd_none(*pmdp))
+ return hmm_pfns_fill(start, end, hmm_vma_walk, 0);
+
+ if (!(hmm_pfn[0] & HMM_PFN_VALID))
+ goto out;
+
+ if (pmd_trans_huge(*pmdp)) {
+ if (!(minfo & MIGRATE_VMA_SELECT_SYSTEM))
+ goto out;
+
+ folio = pmd_folio(*pmdp);
+ if (is_huge_zero_folio(folio))
+ return hmm_pfns_fill(start, end, hmm_vma_walk, 0);
+
+ } else if (!pmd_present(*pmdp)) {
+ const softleaf_t entry = softleaf_from_pmd(*pmdp);
+
+ folio = softleaf_to_folio(entry);
+
+ if (!softleaf_is_device_private(entry))
+ goto out;
+
+ if (!(minfo & MIGRATE_VMA_SELECT_DEVICE_PRIVATE))
+ goto out;
+
+ if (folio->pgmap->owner != migrate->pgmap_owner)
+ goto out;
+
+ } else {
+ hmm_vma_walk->last = start;
+ return -EBUSY;
+ }
+
+ folio_get(folio);
+
+ if (folio != fault_folio && unlikely(!folio_trylock(folio))) {
+ folio_put(folio);
+ hmm_pfns_fill(start, end, hmm_vma_walk, HMM_PFN_ERROR);
+ return 0;
+ }
+
+ if (thp_migration_supported() &&
+ (migrate->flags & MIGRATE_VMA_SELECT_COMPOUND) &&
+ (IS_ALIGNED(start, HPAGE_PMD_SIZE) &&
+ IS_ALIGNED(end, HPAGE_PMD_SIZE))) {
+
+ struct page_vma_mapped_walk pvmw = {
+ .ptl = hmm_vma_walk->ptl,
+ .address = start,
+ .pmd = pmdp,
+ .vma = walk->vma,
+ };
+
+ hmm_pfn[0] |= HMM_PFN_MIGRATE | HMM_PFN_COMPOUND;
+
+ r = set_pmd_migration_entry(&pvmw, folio_page(folio, 0));
+ if (r) {
+ hmm_pfn[0] &= ~(HMM_PFN_MIGRATE | HMM_PFN_COMPOUND);
+ r = -ENOENT; /* fallback */
+ goto unlock_out;
+ }
+ for (i = 1, start += PAGE_SIZE; start < end; start += PAGE_SIZE, i++)
+ hmm_pfn[i] &= HMM_PFN_INOUT_FLAGS;
+
+ } else {
+ r = -ENOENT; /* fallback */
+ goto unlock_out;
+ }
+
+
+out:
+ return r;
+
+unlock_out:
+ if (folio != fault_folio)
+ folio_unlock(folio);
+ folio_put(folio);
+ goto out;
}
+/*
+ * Install migration entries if migration is requested, either from the
+ * fault or the migrate path.
+ *
+ */
static int hmm_vma_handle_migrate_prepare(const struct mm_walk *walk,
pmd_t *pmdp,
- pte_t *pte,
+ pte_t *ptep,
unsigned long addr,
- unsigned long *hmm_pfn)
+ unsigned long *hmm_pfn,
+ bool *unmapped)
{
- // TODO: implement migration entry insertion
+ struct hmm_vma_walk *hmm_vma_walk = walk->private;
+ struct hmm_range *range = hmm_vma_walk->range;
+ struct migrate_vma *migrate = range->migrate;
+ struct mm_struct *mm = walk->vma->vm_mm;
+ struct folio *fault_folio = NULL;
+ enum migrate_vma_info minfo;
+ struct dev_pagemap *pgmap;
+ bool anon_exclusive;
+ struct folio *folio;
+ unsigned long pfn;
+ struct page *page;
+ softleaf_t entry;
+ pte_t pte, swp_pte;
+ bool writable = false;
+
+ /* Do we want to migrate at all? */
+ minfo = hmm_select_migrate(range);
+ if (!minfo)
+ return 0;
+
+ WARN_ON_ONCE(!migrate);
+ HMM_ASSERT_PTE_LOCKED(hmm_vma_walk, true);
+
+ fault_folio = migrate->fault_page ?
+ page_folio(migrate->fault_page) : NULL;
+
+ pte = ptep_get(ptep);
+
+ if (pte_none(pte)) {
+ /* migrate without faulting case */
+ if (vma_is_anonymous(walk->vma)) {
+ *hmm_pfn &= HMM_PFN_INOUT_FLAGS;
+ *hmm_pfn |= HMM_PFN_MIGRATE;
+ goto out;
+ }
+ }
+
+ if (!(hmm_pfn[0] & HMM_PFN_VALID))
+ goto out;
+
+ if (!pte_present(pte)) {
+ /*
+ * Only care about unaddressable device page special
+ * page table entry. Other special swap entries are not
+ * migratable, and we ignore regular swapped pages.
+ */
+ entry = softleaf_from_pte(pte);
+ if (!softleaf_is_device_private(entry))
+ goto out;
+
+ if (!(minfo & MIGRATE_VMA_SELECT_DEVICE_PRIVATE))
+ goto out;
+
+ page = softleaf_to_page(entry);
+ folio = page_folio(page);
+ if (folio->pgmap->owner != migrate->pgmap_owner)
+ goto out;
+
+ if (folio_test_large(folio)) {
+ int ret;
+
+ pte_unmap_unlock(ptep, hmm_vma_walk->ptl);
+ hmm_vma_walk->ptelocked = false;
+ ret = migrate_vma_split_folio(folio,
+ migrate->fault_page);
+ if (ret)
+ goto out_error;
+ return -EAGAIN;
+ }
+
+ pfn = page_to_pfn(page);
+ if (softleaf_is_device_private_write(entry))
+ writable = true;
+ } else {
+ pfn = pte_pfn(pte);
+ if (is_zero_pfn(pfn) &&
+ (minfo & MIGRATE_VMA_SELECT_SYSTEM)) {
+ *hmm_pfn = HMM_PFN_MIGRATE;
+ goto out;
+ }
+ page = vm_normal_page(walk->vma, addr, pte);
+ if (page && !is_zone_device_page(page) &&
+ !(minfo & MIGRATE_VMA_SELECT_SYSTEM)) {
+ goto out;
+ } else if (page && is_device_coherent_page(page)) {
+ pgmap = page_pgmap(page);
+
+ if (!(minfo &
+ MIGRATE_VMA_SELECT_DEVICE_COHERENT) ||
+ pgmap->owner != migrate->pgmap_owner)
+ goto out;
+ }
+
+ folio = page ? page_folio(page) : NULL;
+ if (folio && folio_test_large(folio)) {
+ int ret;
+
+ pte_unmap_unlock(ptep, hmm_vma_walk->ptl);
+ hmm_vma_walk->ptelocked = false;
+
+ ret = migrate_vma_split_folio(folio,
+ migrate->fault_page);
+ if (ret)
+ goto out_error;
+ return -EAGAIN;
+ }
+
+ writable = pte_write(pte);
+ }
+
+ if (!page || !page->mapping)
+ goto out;
+
+ /*
+ * By getting a reference on the folio we pin it and that blocks
+ * any kind of migration. Side effect is that it "freezes" the
+ * pte.
+ *
+ * We drop this reference after isolating the folio from the lru
+ * for non-device folios (device folios are not on the lru and thus
+ * can't be dropped from it).
+ */
+ folio = page_folio(page);
+ folio_get(folio);
+
+ /*
+ * We rely on folio_trylock() to avoid deadlock between
+ * concurrent migrations where each is waiting on the other's
+ * folio lock. If we can't immediately lock the folio we fail this
+ * migration as it is only best effort anyway.
+ *
+ * If we can lock the folio it's safe to set up a migration entry
+ * now. In the common case where the folio is mapped once in a
+ * single process setting up the migration entry now is an
+ * optimisation to avoid walking the rmap later with
+ * try_to_migrate().
+ */
+
+ if (fault_folio == folio || folio_trylock(folio)) {
+ anon_exclusive = folio_test_anon(folio) &&
+ PageAnonExclusive(page);
+
+ flush_cache_page(walk->vma, addr, pfn);
+
+ if (anon_exclusive) {
+ pte = ptep_clear_flush(walk->vma, addr, ptep);
+
+ if (folio_try_share_anon_rmap_pte(folio, page)) {
+ set_pte_at(mm, addr, ptep, pte);
+ folio_unlock(folio);
+ folio_put(folio);
+ goto out;
+ }
+ } else {
+ pte = ptep_get_and_clear(mm, addr, ptep);
+ }
+
+ if (pte_dirty(pte))
+ folio_mark_dirty(folio);
+
+ /* Setup special migration page table entry */
+ if (writable)
+ entry = make_writable_migration_entry(pfn);
+ else if (anon_exclusive)
+ entry = make_readable_exclusive_migration_entry(pfn);
+ else
+ entry = make_readable_migration_entry(pfn);
+
+ if (pte_present(pte)) {
+ if (pte_young(pte))
+ entry = make_migration_entry_young(entry);
+ if (pte_dirty(pte))
+ entry = make_migration_entry_dirty(entry);
+ }
+
+ swp_pte = swp_entry_to_pte(entry);
+ if (pte_present(pte)) {
+ if (pte_soft_dirty(pte))
+ swp_pte = pte_swp_mksoft_dirty(swp_pte);
+ if (pte_uffd_wp(pte))
+ swp_pte = pte_swp_mkuffd_wp(swp_pte);
+ } else {
+ if (pte_swp_soft_dirty(pte))
+ swp_pte = pte_swp_mksoft_dirty(swp_pte);
+ if (pte_swp_uffd_wp(pte))
+ swp_pte = pte_swp_mkuffd_wp(swp_pte);
+ }
+
+ set_pte_at(mm, addr, ptep, swp_pte);
+ folio_remove_rmap_pte(folio, page, walk->vma);
+ folio_put(folio);
+ *hmm_pfn |= HMM_PFN_MIGRATE;
+
+ if (pte_present(pte))
+ *unmapped = true;
+ } else
+ folio_put(folio);
+out:
return 0;
+out_error:
+ return -EFAULT;
}
static int hmm_vma_walk_split(pmd_t *pmdp,
unsigned long addr,
struct mm_walk *walk)
{
- // TODO : implement split
- return 0;
-}
+ struct hmm_vma_walk *hmm_vma_walk = walk->private;
+ struct hmm_range *range = hmm_vma_walk->range;
+ struct migrate_vma *migrate = range->migrate;
+ struct folio *folio, *fault_folio;
+ spinlock_t *ptl;
+ int ret = 0;
+ HMM_ASSERT_UNLOCKED(hmm_vma_walk);
+
+ fault_folio = (migrate && migrate->fault_page) ?
+ page_folio(migrate->fault_page) : NULL;
+
+ ptl = pmd_lock(walk->mm, pmdp);
+ if (unlikely(!pmd_trans_huge(*pmdp))) {
+ spin_unlock(ptl);
+ goto out;
+ }
+
+ folio = pmd_folio(*pmdp);
+ if (is_huge_zero_folio(folio)) {
+ spin_unlock(ptl);
+ split_huge_pmd(walk->vma, pmdp, addr);
+ } else {
+ folio_get(folio);
+ spin_unlock(ptl);
+
+ if (folio != fault_folio) {
+ if (unlikely(!folio_trylock(folio))) {
+ folio_put(folio);
+ ret = -EBUSY;
+ goto out;
+ }
+ } else
+ folio_put(folio);
+
+ ret = split_folio(folio);
+ if (fault_folio != folio) {
+ folio_unlock(folio);
+ folio_put(folio);
+ }
+
+ }
+out:
+ return ret;
+}
#else
static int hmm_vma_handle_migrate_prepare_pmd(const struct mm_walk *walk,
pmd_t *pmdp,
@@ -518,7 +908,8 @@ static int hmm_vma_handle_migrate_prepare(const struct mm_walk *walk,
pmd_t *pmdp,
pte_t *pte,
unsigned long addr,
- unsigned long *hmm_pfn)
+ unsigned long *hmm_pfn,
+ bool *unmapped)
{
return 0;
}
@@ -573,6 +964,7 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
enum migrate_vma_info minfo;
unsigned long addr = start;
unsigned long *hmm_pfns;
+ bool unmapped = false;
unsigned long i;
pte_t *ptep;
pmd_t pmd;
@@ -654,7 +1046,7 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
goto again;
}
- r = hmm_vma_handle_pmd(walk, addr, end, hmm_pfns, pmd);
+ r = hmm_vma_handle_pmd(walk, start, end, hmm_pfns, pmd);
// If not migrating we are done
if (r || !minfo) {
@@ -723,9 +1115,13 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
return r;
}
- r = hmm_vma_handle_migrate_prepare(walk, pmdp, ptep, addr, hmm_pfns);
+ r = hmm_vma_handle_migrate_prepare(walk, pmdp, ptep, addr, hmm_pfns, &unmapped);
if (r == -EAGAIN) {
HMM_ASSERT_UNLOCKED(hmm_vma_walk);
+ if (unmapped) {
+ flush_tlb_range(walk->vma, start, addr);
+ unmapped = false;
+ }
goto again;
}
if (r) {
@@ -733,6 +1129,8 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
break;
}
}
+ if (unmapped)
+ flush_tlb_range(walk->vma, start, addr);
if (hmm_vma_walk->ptelocked) {
pte_unmap_unlock(ptep - 1, hmm_vma_walk->ptl);
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index a4062fd21490..7ca5dc80d39b 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -734,7 +734,17 @@ static void migrate_vma_unmap(struct migrate_vma *migrate)
*/
int migrate_vma_setup(struct migrate_vma *args)
{
+ int ret;
long nr_pages = (args->end - args->start) >> PAGE_SHIFT;
+ struct hmm_range range = {
+ .notifier = NULL,
+ .start = args->start,
+ .end = args->end,
+ .hmm_pfns = args->src,
+ .dev_private_owner = args->pgmap_owner,
+ .migrate = args,
+ .default_flags = HMM_PFN_REQ_MIGRATE
+ };
args->start &= PAGE_MASK;
args->end &= PAGE_MASK;
@@ -759,17 +769,25 @@ int migrate_vma_setup(struct migrate_vma *args)
args->cpages = 0;
args->npages = 0;
- migrate_vma_collect(args);
+ if (args->flags & MIGRATE_VMA_FAULT)
+ range.default_flags |= HMM_PFN_REQ_FAULT;
+
+ ret = hmm_range_fault(&range);
- if (args->cpages)
- migrate_vma_unmap(args);
+ migrate_hmm_range_setup(&range);
+
+ /* Remove migration PTEs */
+ if (ret) {
+ migrate_vma_pages(args);
+ migrate_vma_finalize(args);
+ }
/*
* At this point pages are locked and unmapped, and thus they have
* stable content and can safely be copied to destination memory that
* is allocated by the drivers.
*/
- return 0;
+ return ret;
}
EXPORT_SYMBOL(migrate_vma_setup);
--
2.50.0
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH v7 5/6] mm: add new testcase for the migrate on fault case
2026-03-30 4:30 [PATCH v7 0/6] Migrate on fault for device pages mpenttil
` (3 preceding siblings ...)
2026-03-30 4:30 ` [PATCH v7 4/6] mm: setup device page migration in HMM pagewalk mpenttil
@ 2026-03-30 4:30 ` mpenttil
2026-03-30 6:21 ` Christoph Hellwig
2026-03-30 4:30 ` [PATCH v7 6/6] mm:/migrate_device.c: remove migrate_vma_collect_*() mpenttil
5 siblings, 1 reply; 15+ messages in thread
From: mpenttil @ 2026-03-30 4:30 UTC (permalink / raw)
To: linux-mm
Cc: linux-kernel, Mika Penttilä, David Hildenbrand,
Jason Gunthorpe, Leon Romanovsky, Alistair Popple, Balbir Singh,
Zi Yan, Matthew Brost, Marco Pagani
From: Mika Penttilä <mpenttil@redhat.com>
Cc: David Hildenbrand <david@kernel.org>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Leon Romanovsky <leonro@nvidia.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Balbir Singh <balbirs@nvidia.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Marco Pagani <marpagan@redhat.com>
Signed-off-by: Mika Penttilä <mpenttil@redhat.com>
---
lib/test_hmm.c | 99 ++++++++++++++++++++++++++
lib/test_hmm_uapi.h | 19 ++---
tools/testing/selftests/mm/hmm-tests.c | 54 ++++++++++++++
3 files changed, 163 insertions(+), 9 deletions(-)
diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index 01aa0b60df2f..5ddcec056dfb 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -36,6 +36,7 @@
#define DMIRROR_RANGE_FAULT_TIMEOUT 1000
#define DEVMEM_CHUNK_SIZE (256 * 1024 * 1024U)
#define DEVMEM_CHUNKS_RESERVE 16
+#define PFNS_ARRAY_SIZE 64
/*
* For device_private pages, dpage is just a dummy struct page
@@ -1196,6 +1197,100 @@ static int dmirror_migrate_to_device(struct dmirror *dmirror,
return ret;
}
+static int do_fault_and_migrate(struct dmirror *dmirror, struct hmm_range *range)
+{
+ struct migrate_vma *migrate = range->migrate;
+ int ret;
+
+ mmap_read_lock(dmirror->notifier.mm);
+
+ /* Fault-in pages for migration and update device page table */
+ ret = dmirror_range_fault(dmirror, range);
+
+ pr_debug("Migrating from sys mem to device mem\n");
+ migrate_hmm_range_setup(range);
+
+ dmirror_migrate_alloc_and_copy(migrate, dmirror);
+ migrate_vma_pages(migrate);
+ dmirror_migrate_finalize_and_map(migrate, dmirror);
+ migrate_vma_finalize(migrate);
+
+ mmap_read_unlock(dmirror->notifier.mm);
+ return ret;
+}
+
+static int dmirror_fault_and_migrate_to_device(struct dmirror *dmirror,
+ struct hmm_dmirror_cmd *cmd)
+{
+ unsigned long start, size, end, next;
+ unsigned long src_pfns[PFNS_ARRAY_SIZE] = { 0 };
+ unsigned long dst_pfns[PFNS_ARRAY_SIZE] = { 0 };
+ struct migrate_vma migrate = { 0 };
+ struct hmm_range range = { 0 };
+ struct dmirror_bounce bounce;
+ int ret = 0;
+
+ /* Whole range */
+ start = cmd->addr;
+ size = cmd->npages << PAGE_SHIFT;
+ end = start + size;
+
+ if (!mmget_not_zero(dmirror->notifier.mm)) {
+ ret = -EFAULT;
+ goto out;
+ }
+
+ migrate.pgmap_owner = dmirror->mdevice;
+ migrate.src = src_pfns;
+ migrate.dst = dst_pfns;
+ migrate.flags = MIGRATE_VMA_SELECT_SYSTEM;
+
+ range.migrate = &migrate;
+ range.hmm_pfns = src_pfns;
+ range.pfn_flags_mask = 0;
+ range.default_flags = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_MIGRATE;
+ range.dev_private_owner = dmirror->mdevice;
+ range.notifier = &dmirror->notifier;
+
+ for (next = start; next < end; next = range.end) {
+ range.start = next;
+ range.end = min(end, next + (PFNS_ARRAY_SIZE << PAGE_SHIFT));
+
+ pr_debug("Fault and migrate range start:%#lx end:%#lx\n",
+ range.start, range.end);
+
+ ret = do_fault_and_migrate(dmirror, &range);
+ if (ret)
+ goto out_mmput;
+ }
+
+ /*
+ * Return the migrated data for verification.
+ * Only pages in the device zone are copied back.
+ */
+ ret = dmirror_bounce_init(&bounce, start, size);
+ if (ret)
+ goto out_mmput;
+
+ mutex_lock(&dmirror->mutex);
+ ret = dmirror_do_read(dmirror, start, end, &bounce);
+ mutex_unlock(&dmirror->mutex);
+ if (ret == 0) {
+ ret = copy_to_user(u64_to_user_ptr(cmd->ptr), bounce.ptr, bounce.size);
+ if (ret)
+ ret = -EFAULT;
+ }
+
+ cmd->cpages = bounce.cpages;
+ dmirror_bounce_fini(&bounce);
+
+
+out_mmput:
+ mmput(dmirror->notifier.mm);
+out:
+ return ret;
+}
+
static void dmirror_mkentry(struct dmirror *dmirror, struct hmm_range *range,
unsigned char *perm, unsigned long entry)
{
@@ -1512,6 +1607,10 @@ static long dmirror_fops_unlocked_ioctl(struct file *filp,
ret = dmirror_migrate_to_device(dmirror, &cmd);
break;
+ case HMM_DMIRROR_MIGRATE_ON_FAULT_TO_DEV:
+ ret = dmirror_fault_and_migrate_to_device(dmirror, &cmd);
+ break;
+
case HMM_DMIRROR_MIGRATE_TO_SYS:
ret = dmirror_migrate_to_system(dmirror, &cmd);
break;
diff --git a/lib/test_hmm_uapi.h b/lib/test_hmm_uapi.h
index f94c6d457338..0b6e7a419e36 100644
--- a/lib/test_hmm_uapi.h
+++ b/lib/test_hmm_uapi.h
@@ -29,15 +29,16 @@ struct hmm_dmirror_cmd {
};
/* Expose the address space of the calling process through hmm device file */
-#define HMM_DMIRROR_READ _IOWR('H', 0x00, struct hmm_dmirror_cmd)
-#define HMM_DMIRROR_WRITE _IOWR('H', 0x01, struct hmm_dmirror_cmd)
-#define HMM_DMIRROR_MIGRATE_TO_DEV _IOWR('H', 0x02, struct hmm_dmirror_cmd)
-#define HMM_DMIRROR_MIGRATE_TO_SYS _IOWR('H', 0x03, struct hmm_dmirror_cmd)
-#define HMM_DMIRROR_SNAPSHOT _IOWR('H', 0x04, struct hmm_dmirror_cmd)
-#define HMM_DMIRROR_EXCLUSIVE _IOWR('H', 0x05, struct hmm_dmirror_cmd)
-#define HMM_DMIRROR_CHECK_EXCLUSIVE _IOWR('H', 0x06, struct hmm_dmirror_cmd)
-#define HMM_DMIRROR_RELEASE _IOWR('H', 0x07, struct hmm_dmirror_cmd)
-#define HMM_DMIRROR_FLAGS _IOWR('H', 0x08, struct hmm_dmirror_cmd)
+#define HMM_DMIRROR_READ _IOWR('H', 0x00, struct hmm_dmirror_cmd)
+#define HMM_DMIRROR_WRITE _IOWR('H', 0x01, struct hmm_dmirror_cmd)
+#define HMM_DMIRROR_MIGRATE_TO_DEV _IOWR('H', 0x02, struct hmm_dmirror_cmd)
+#define HMM_DMIRROR_MIGRATE_ON_FAULT_TO_DEV _IOWR('H', 0x03, struct hmm_dmirror_cmd)
+#define HMM_DMIRROR_MIGRATE_TO_SYS _IOWR('H', 0x04, struct hmm_dmirror_cmd)
+#define HMM_DMIRROR_SNAPSHOT _IOWR('H', 0x05, struct hmm_dmirror_cmd)
+#define HMM_DMIRROR_EXCLUSIVE _IOWR('H', 0x06, struct hmm_dmirror_cmd)
+#define HMM_DMIRROR_CHECK_EXCLUSIVE _IOWR('H', 0x07, struct hmm_dmirror_cmd)
+#define HMM_DMIRROR_RELEASE _IOWR('H', 0x08, struct hmm_dmirror_cmd)
+#define HMM_DMIRROR_FLAGS _IOWR('H', 0x09, struct hmm_dmirror_cmd)
#define HMM_DMIRROR_FLAG_FAIL_ALLOC (1ULL << 0)
diff --git a/tools/testing/selftests/mm/hmm-tests.c b/tools/testing/selftests/mm/hmm-tests.c
index e8328c89d855..c75616875c9e 100644
--- a/tools/testing/selftests/mm/hmm-tests.c
+++ b/tools/testing/selftests/mm/hmm-tests.c
@@ -277,6 +277,13 @@ static int hmm_migrate_sys_to_dev(int fd,
return hmm_dmirror_cmd(fd, HMM_DMIRROR_MIGRATE_TO_DEV, buffer, npages);
}
+static int hmm_migrate_on_fault_sys_to_dev(int fd,
+ struct hmm_buffer *buffer,
+ unsigned long npages)
+{
+ return hmm_dmirror_cmd(fd, HMM_DMIRROR_MIGRATE_ON_FAULT_TO_DEV, buffer, npages);
+}
+
static int hmm_migrate_dev_to_sys(int fd,
struct hmm_buffer *buffer,
unsigned long npages)
@@ -1034,6 +1041,53 @@ TEST_F(hmm, migrate)
hmm_buffer_free(buffer);
}
+
+/*
+ * Fault and migrate anonymous memory to device private memory.
+ */
+TEST_F(hmm, migrate_on_fault)
+{
+ struct hmm_buffer *buffer;
+ unsigned long npages;
+ unsigned long size;
+ unsigned long i;
+ int *ptr;
+ int ret;
+
+ npages = ALIGN(HMM_BUFFER_SIZE, self->page_size) >> self->page_shift;
+ ASSERT_NE(npages, 0);
+ size = npages << self->page_shift;
+
+ buffer = malloc(sizeof(*buffer));
+ ASSERT_NE(buffer, NULL);
+
+ buffer->fd = -1;
+ buffer->size = size;
+ buffer->mirror = malloc(size);
+ ASSERT_NE(buffer->mirror, NULL);
+
+ buffer->ptr = mmap(NULL, size,
+ PROT_READ | PROT_WRITE,
+ MAP_PRIVATE | MAP_ANONYMOUS,
+ buffer->fd, 0);
+ ASSERT_NE(buffer->ptr, MAP_FAILED);
+
+ /* Initialize buffer in system memory. */
+ for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+ ptr[i] = i;
+
+ /* Fault and migrate memory to device. */
+ ret = hmm_migrate_on_fault_sys_to_dev(self->fd, buffer, npages);
+ ASSERT_EQ(ret, 0);
+ ASSERT_EQ(buffer->cpages, npages);
+
+ /* Check what the device read. */
+ for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
+ ASSERT_EQ(ptr[i], i);
+
+ hmm_buffer_free(buffer);
+}
+
/*
* Migrate anonymous memory to device private memory and fault some of it back
* to system memory, then try migrating the resulting mix of system and device
--
2.50.0
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH v7 6/6] mm:/migrate_device.c: remove migrate_vma_collect_*()
2026-03-30 4:30 [PATCH v7 0/6] Migrate on fault for device pages mpenttil
` (4 preceding siblings ...)
2026-03-30 4:30 ` [PATCH v7 5/6] mm: add new testcase for the migrate on fault case mpenttil
@ 2026-03-30 4:30 ` mpenttil
2026-03-30 6:22 ` Christoph Hellwig
5 siblings, 1 reply; 15+ messages in thread
From: mpenttil @ 2026-03-30 4:30 UTC (permalink / raw)
To: linux-mm
Cc: linux-kernel, Mika Penttilä, David Hildenbrand,
Jason Gunthorpe, Leon Romanovsky, Alistair Popple, Balbir Singh,
Zi Yan, Matthew Brost
From: Mika Penttilä <mpenttil@redhat.com>
With the unified fault handling and migrate path, the
migrate_vma_collect_*() functions are now unused; remove them.
Cc: David Hildenbrand <david@kernel.org>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Leon Romanovsky <leonro@nvidia.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Balbir Singh <balbirs@nvidia.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Mika Penttilä <mpenttil@redhat.com>
---
mm/migrate_device.c | 508 --------------------------------------------
1 file changed, 508 deletions(-)
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index 7ca5dc80d39b..9098b64aeb2c 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -18,514 +18,6 @@
#include <asm/tlbflush.h>
#include "internal.h"
-static int migrate_vma_collect_skip(unsigned long start,
- unsigned long end,
- struct mm_walk *walk)
-{
- struct migrate_vma *migrate = walk->private;
- unsigned long addr;
-
- for (addr = start; addr < end; addr += PAGE_SIZE) {
- migrate->dst[migrate->npages] = 0;
- migrate->src[migrate->npages++] = 0;
- }
-
- return 0;
-}
-
-static int migrate_vma_collect_hole(unsigned long start,
- unsigned long end,
- __always_unused int depth,
- struct mm_walk *walk)
-{
- struct migrate_vma *migrate = walk->private;
- unsigned long addr;
-
- /* Only allow populating anonymous memory. */
- if (!vma_is_anonymous(walk->vma))
- return migrate_vma_collect_skip(start, end, walk);
-
- if (thp_migration_supported() &&
- (migrate->flags & MIGRATE_VMA_SELECT_COMPOUND) &&
- (IS_ALIGNED(start, HPAGE_PMD_SIZE) &&
- IS_ALIGNED(end, HPAGE_PMD_SIZE))) {
- migrate->src[migrate->npages] = MIGRATE_PFN_MIGRATE |
- MIGRATE_PFN_COMPOUND;
- migrate->dst[migrate->npages] = 0;
- migrate->npages++;
- migrate->cpages++;
-
- /*
- * Collect the remaining entries as holes, in case we
- * need to split later
- */
- return migrate_vma_collect_skip(start + PAGE_SIZE, end, walk);
- }
-
- for (addr = start; addr < end; addr += PAGE_SIZE) {
- migrate->src[migrate->npages] = MIGRATE_PFN_MIGRATE;
- migrate->dst[migrate->npages] = 0;
- migrate->npages++;
- migrate->cpages++;
- }
-
- return 0;
-}
-
-/**
- * migrate_vma_split_folio() - Helper function to split a THP folio
- * @folio: the folio to split
- * @fault_page: struct page associated with the fault if any
- *
- * Returns 0 on success
- */
-static int migrate_vma_split_folio(struct folio *folio,
- struct page *fault_page)
-{
- int ret;
- struct folio *fault_folio = fault_page ? page_folio(fault_page) : NULL;
- struct folio *new_fault_folio = NULL;
-
- if (folio != fault_folio) {
- folio_get(folio);
- folio_lock(folio);
- }
-
- ret = split_folio(folio);
- if (ret) {
- if (folio != fault_folio) {
- folio_unlock(folio);
- folio_put(folio);
- }
- return ret;
- }
-
- new_fault_folio = fault_page ? page_folio(fault_page) : NULL;
-
- /*
- * Ensure the lock is held on the correct
- * folio after the split
- */
- if (!new_fault_folio) {
- folio_unlock(folio);
- folio_put(folio);
- } else if (folio != new_fault_folio) {
- if (new_fault_folio != fault_folio) {
- folio_get(new_fault_folio);
- folio_lock(new_fault_folio);
- }
- folio_unlock(folio);
- folio_put(folio);
- }
-
- return 0;
-}
-
-/** migrate_vma_collect_huge_pmd - collect THP pages without splitting the
- * folio for device private pages.
- * @pmdp: pointer to pmd entry
- * @start: start address of the range for migration
- * @end: end address of the range for migration
- * @walk: mm_walk callback structure
- * @fault_folio: folio associated with the fault if any
- *
- * Collect the huge pmd entry at @pmdp for migration and set the
- * MIGRATE_PFN_COMPOUND flag in the migrate src entry to indicate that
- * migration will occur at HPAGE_PMD granularity
- */
-static int migrate_vma_collect_huge_pmd(pmd_t *pmdp, unsigned long start,
- unsigned long end, struct mm_walk *walk,
- struct folio *fault_folio)
-{
- struct mm_struct *mm = walk->mm;
- struct folio *folio;
- struct migrate_vma *migrate = walk->private;
- spinlock_t *ptl;
- int ret;
- unsigned long write = 0;
-
- ptl = pmd_lock(mm, pmdp);
- if (pmd_none(*pmdp)) {
- spin_unlock(ptl);
- return migrate_vma_collect_hole(start, end, -1, walk);
- }
-
- if (pmd_trans_huge(*pmdp)) {
- if (!(migrate->flags & MIGRATE_VMA_SELECT_SYSTEM)) {
- spin_unlock(ptl);
- return migrate_vma_collect_skip(start, end, walk);
- }
-
- folio = pmd_folio(*pmdp);
- if (is_huge_zero_folio(folio)) {
- spin_unlock(ptl);
- return migrate_vma_collect_hole(start, end, -1, walk);
- }
- if (pmd_write(*pmdp))
- write = MIGRATE_PFN_WRITE;
- } else if (!pmd_present(*pmdp)) {
- const softleaf_t entry = softleaf_from_pmd(*pmdp);
-
- folio = softleaf_to_folio(entry);
-
- if (!softleaf_is_device_private(entry) ||
- !(migrate->flags & MIGRATE_VMA_SELECT_DEVICE_PRIVATE) ||
- (folio->pgmap->owner != migrate->pgmap_owner)) {
- spin_unlock(ptl);
- return migrate_vma_collect_skip(start, end, walk);
- }
-
- if (softleaf_is_migration(entry)) {
- softleaf_entry_wait_on_locked(entry, ptl);
- spin_unlock(ptl);
- return -EAGAIN;
- }
-
- if (softleaf_is_device_private_write(entry))
- write = MIGRATE_PFN_WRITE;
- } else {
- spin_unlock(ptl);
- return -EAGAIN;
- }
-
- folio_get(folio);
- if (folio != fault_folio && unlikely(!folio_trylock(folio))) {
- spin_unlock(ptl);
- folio_put(folio);
- return migrate_vma_collect_skip(start, end, walk);
- }
-
- if (thp_migration_supported() &&
- (migrate->flags & MIGRATE_VMA_SELECT_COMPOUND) &&
- (IS_ALIGNED(start, HPAGE_PMD_SIZE) &&
- IS_ALIGNED(end, HPAGE_PMD_SIZE))) {
-
- struct page_vma_mapped_walk pvmw = {
- .ptl = ptl,
- .address = start,
- .pmd = pmdp,
- .vma = walk->vma,
- };
-
- unsigned long pfn = page_to_pfn(folio_page(folio, 0));
-
- migrate->src[migrate->npages] = migrate_pfn(pfn) | write
- | MIGRATE_PFN_MIGRATE
- | MIGRATE_PFN_COMPOUND;
- migrate->dst[migrate->npages++] = 0;
- migrate->cpages++;
- ret = set_pmd_migration_entry(&pvmw, folio_page(folio, 0));
- if (ret) {
- migrate->npages--;
- migrate->cpages--;
- migrate->src[migrate->npages] = 0;
- migrate->dst[migrate->npages] = 0;
- goto fallback;
- }
- migrate_vma_collect_skip(start + PAGE_SIZE, end, walk);
- spin_unlock(ptl);
- return 0;
- }
-
-fallback:
- spin_unlock(ptl);
- if (!folio_test_large(folio))
- goto done;
- ret = split_folio(folio);
- if (fault_folio != folio)
- folio_unlock(folio);
- folio_put(folio);
- if (ret)
- return migrate_vma_collect_skip(start, end, walk);
- if (pmd_none(pmdp_get_lockless(pmdp)))
- return migrate_vma_collect_hole(start, end, -1, walk);
-
-done:
- return -ENOENT;
-}
-
-static int migrate_vma_collect_pmd(pmd_t *pmdp,
- unsigned long start,
- unsigned long end,
- struct mm_walk *walk)
-{
- struct migrate_vma *migrate = walk->private;
- struct vm_area_struct *vma = walk->vma;
- struct mm_struct *mm = vma->vm_mm;
- unsigned long addr = start, unmapped = 0;
- spinlock_t *ptl;
- struct folio *fault_folio = migrate->fault_page ?
- page_folio(migrate->fault_page) : NULL;
- pte_t *ptep;
-
-again:
- if (pmd_trans_huge(*pmdp) || !pmd_present(*pmdp)) {
- int ret = migrate_vma_collect_huge_pmd(pmdp, start, end, walk, fault_folio);
-
- if (ret == -EAGAIN)
- goto again;
- if (ret == 0)
- return 0;
- }
-
- ptep = pte_offset_map_lock(mm, pmdp, start, &ptl);
- if (!ptep)
- goto again;
- lazy_mmu_mode_enable();
- ptep += (addr - start) / PAGE_SIZE;
-
- for (; addr < end; addr += PAGE_SIZE, ptep++) {
- struct dev_pagemap *pgmap;
- unsigned long mpfn = 0, pfn;
- struct folio *folio;
- struct page *page;
- softleaf_t entry;
- pte_t pte;
-
- pte = ptep_get(ptep);
-
- if (pte_none(pte)) {
- if (vma_is_anonymous(vma)) {
- mpfn = MIGRATE_PFN_MIGRATE;
- migrate->cpages++;
- }
- goto next;
- }
-
- if (!pte_present(pte)) {
- /*
- * Only care about unaddressable device page special
- * page table entry. Other special swap entries are not
- * migratable, and we ignore regular swapped page.
- */
- entry = softleaf_from_pte(pte);
- if (!softleaf_is_device_private(entry))
- goto next;
-
- page = softleaf_to_page(entry);
- pgmap = page_pgmap(page);
- if (!(migrate->flags &
- MIGRATE_VMA_SELECT_DEVICE_PRIVATE) ||
- pgmap->owner != migrate->pgmap_owner)
- goto next;
-
- folio = page_folio(page);
- if (folio_test_large(folio)) {
- int ret;
-
- lazy_mmu_mode_disable();
- pte_unmap_unlock(ptep, ptl);
- ret = migrate_vma_split_folio(folio,
- migrate->fault_page);
-
- if (ret) {
- if (unmapped)
- flush_tlb_range(walk->vma, start, end);
-
- return migrate_vma_collect_skip(addr, end, walk);
- }
-
- goto again;
- }
-
- mpfn = migrate_pfn(page_to_pfn(page)) |
- MIGRATE_PFN_MIGRATE;
- if (softleaf_is_device_private_write(entry))
- mpfn |= MIGRATE_PFN_WRITE;
- } else {
- pfn = pte_pfn(pte);
- if (is_zero_pfn(pfn) &&
- (migrate->flags & MIGRATE_VMA_SELECT_SYSTEM)) {
- mpfn = MIGRATE_PFN_MIGRATE;
- migrate->cpages++;
- goto next;
- }
- page = vm_normal_page(migrate->vma, addr, pte);
- if (page && !is_zone_device_page(page) &&
- !(migrate->flags & MIGRATE_VMA_SELECT_SYSTEM)) {
- goto next;
- } else if (page && is_device_coherent_page(page)) {
- pgmap = page_pgmap(page);
-
- if (!(migrate->flags &
- MIGRATE_VMA_SELECT_DEVICE_COHERENT) ||
- pgmap->owner != migrate->pgmap_owner)
- goto next;
- }
- folio = page ? page_folio(page) : NULL;
- if (folio && folio_test_large(folio)) {
- int ret;
-
- lazy_mmu_mode_disable();
- pte_unmap_unlock(ptep, ptl);
- ret = migrate_vma_split_folio(folio,
- migrate->fault_page);
-
- if (ret) {
- if (unmapped)
- flush_tlb_range(walk->vma, start, end);
-
- return migrate_vma_collect_skip(addr, end, walk);
- }
-
- goto again;
- }
- mpfn = migrate_pfn(pfn) | MIGRATE_PFN_MIGRATE;
- mpfn |= pte_write(pte) ? MIGRATE_PFN_WRITE : 0;
- }
-
- if (!page || !page->mapping) {
- mpfn = 0;
- goto next;
- }
-
- /*
- * By getting a reference on the folio we pin it and that blocks
- * any kind of migration. Side effect is that it "freezes" the
- * pte.
- *
- * We drop this reference after isolating the folio from the lru
- * for non device folio (device folio are not on the lru and thus
- * can't be dropped from it).
- */
- folio = page_folio(page);
- folio_get(folio);
-
- /*
- * We rely on folio_trylock() to avoid deadlock between
- * concurrent migrations where each is waiting on the others
- * folio lock. If we can't immediately lock the folio we fail this
- * migration as it is only best effort anyway.
- *
- * If we can lock the folio it's safe to set up a migration entry
- * now. In the common case where the folio is mapped once in a
- * single process setting up the migration entry now is an
- * optimisation to avoid walking the rmap later with
- * try_to_migrate().
- */
- if (fault_folio == folio || folio_trylock(folio)) {
- bool anon_exclusive;
- pte_t swp_pte;
-
- flush_cache_page(vma, addr, pte_pfn(pte));
- anon_exclusive = folio_test_anon(folio) &&
- PageAnonExclusive(page);
- if (anon_exclusive) {
- pte = ptep_clear_flush(vma, addr, ptep);
-
- if (folio_try_share_anon_rmap_pte(folio, page)) {
- set_pte_at(mm, addr, ptep, pte);
- if (fault_folio != folio)
- folio_unlock(folio);
- folio_put(folio);
- mpfn = 0;
- goto next;
- }
- } else {
- pte = ptep_get_and_clear(mm, addr, ptep);
- }
-
- migrate->cpages++;
-
- /* Set the dirty flag on the folio now the pte is gone. */
- if (pte_dirty(pte))
- folio_mark_dirty(folio);
-
- /* Setup special migration page table entry */
- if (mpfn & MIGRATE_PFN_WRITE)
- entry = make_writable_migration_entry(
- page_to_pfn(page));
- else if (anon_exclusive)
- entry = make_readable_exclusive_migration_entry(
- page_to_pfn(page));
- else
- entry = make_readable_migration_entry(
- page_to_pfn(page));
- if (pte_present(pte)) {
- if (pte_young(pte))
- entry = make_migration_entry_young(entry);
- if (pte_dirty(pte))
- entry = make_migration_entry_dirty(entry);
- }
- swp_pte = swp_entry_to_pte(entry);
- if (pte_present(pte)) {
- if (pte_soft_dirty(pte))
- swp_pte = pte_swp_mksoft_dirty(swp_pte);
- if (pte_uffd_wp(pte))
- swp_pte = pte_swp_mkuffd_wp(swp_pte);
- } else {
- if (pte_swp_soft_dirty(pte))
- swp_pte = pte_swp_mksoft_dirty(swp_pte);
- if (pte_swp_uffd_wp(pte))
- swp_pte = pte_swp_mkuffd_wp(swp_pte);
- }
- set_pte_at(mm, addr, ptep, swp_pte);
-
- /*
- * This is like regular unmap: we remove the rmap and
- * drop the folio refcount. The folio won't be freed, as
- * we took a reference just above.
- */
- folio_remove_rmap_pte(folio, page, vma);
- folio_put(folio);
-
- if (pte_present(pte))
- unmapped++;
- } else {
- folio_put(folio);
- mpfn = 0;
- }
-
-next:
- migrate->dst[migrate->npages] = 0;
- migrate->src[migrate->npages++] = mpfn;
- }
-
- /* Only flush the TLB if we actually modified any entries */
- if (unmapped)
- flush_tlb_range(walk->vma, start, end);
-
- lazy_mmu_mode_disable();
- pte_unmap_unlock(ptep - 1, ptl);
-
- return 0;
-}
-
-static const struct mm_walk_ops migrate_vma_walk_ops = {
- .pmd_entry = migrate_vma_collect_pmd,
- .pte_hole = migrate_vma_collect_hole,
- .walk_lock = PGWALK_RDLOCK,
-};
-
-/*
- * migrate_vma_collect() - collect pages over a range of virtual addresses
- * @migrate: migrate struct containing all migration information
- *
- * This will walk the CPU page table. For each virtual address backed by a
- * valid page, it updates the src array and takes a reference on the page, in
- * order to pin the page until we lock it and unmap it.
- */
-static void migrate_vma_collect(struct migrate_vma *migrate)
-{
- struct mmu_notifier_range range;
-
- /*
- * Note that the pgmap_owner is passed to the mmu notifier callback so
- * that the registered device driver can skip invalidating device
- * private page mappings that won't be migrated.
- */
- mmu_notifier_range_init_owner(&range, MMU_NOTIFY_MIGRATE, 0,
- migrate->vma->vm_mm, migrate->start, migrate->end,
- migrate->pgmap_owner);
- mmu_notifier_invalidate_range_start(&range);
-
- walk_page_range(migrate->vma->vm_mm, migrate->start, migrate->end,
- &migrate_vma_walk_ops, migrate);
-
- mmu_notifier_invalidate_range_end(&range);
- migrate->end = migrate->start + (migrate->npages << PAGE_SHIFT);
-}
-
/*
* migrate_vma_check_page() - check if page is pinned or not
* @page: struct page to check
--
2.50.0
^ permalink raw reply related [flat|nested] 15+ messages in thread
* Re: [PATCH v7 1/6] mm:/Kconfig changes for migrate on fault for device pages
2026-03-30 4:30 ` [PATCH v7 1/6] mm:/Kconfig changes for migrate " mpenttil
@ 2026-03-30 6:20 ` Christoph Hellwig
0 siblings, 0 replies; 15+ messages in thread
From: Christoph Hellwig @ 2026-03-30 6:20 UTC (permalink / raw)
To: mpenttil
Cc: linux-mm, linux-kernel, Andrew Morton, David Hildenbrand,
Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko
On Mon, Mar 30, 2026 at 07:30:12AM +0300, mpenttil@redhat.com wrote:
> From: Mika Penttilä <mpenttil@redhat.com>
>
> With the unified HMM/migrate_device page table walk
> migrate_device needs HMM enabled and HMM needs
> MMU notifiers. Enable them explicitly to avoid
> breaking random configs.
You can use a lot more space in your commit logs, ending the lines so
early reads a bit weird.
> diff --git a/mm/Kconfig b/mm/Kconfig
> index ebd8ea353687..583d92bba2e8 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -647,6 +647,7 @@ config MIGRATION
>
> config DEVICE_MIGRATION
> def_bool MIGRATION && ZONE_DEVICE
> + select HMM_MIRROR
>
> config ARCH_ENABLE_HUGEPAGE_MIGRATION
> bool
> @@ -1222,6 +1223,7 @@ config ZONE_DEVICE
> config HMM_MIRROR
> bool
> depends on MMU
> + select MMU_NOTIFIER
But either way this really should go into the patch that actually adds
the code dependency anyway.
* Re: [PATCH v7 5/6] mm: add new testcase for the migrate on fault case
2026-03-30 4:30 ` [PATCH v7 5/6] mm: add new testcase for the migrate on fault case mpenttil
@ 2026-03-30 6:21 ` Christoph Hellwig
2026-03-30 6:40 ` Mika Penttilä
0 siblings, 1 reply; 15+ messages in thread
From: Christoph Hellwig @ 2026-03-30 6:21 UTC (permalink / raw)
To: mpenttil
Cc: linux-mm, linux-kernel, David Hildenbrand, Jason Gunthorpe,
Leon Romanovsky, Alistair Popple, Balbir Singh, Zi Yan,
Matthew Brost, Marco Pagani
On Mon, Mar 30, 2026 at 07:30:16AM +0300, mpenttil@redhat.com wrote:
> From: Mika Penttilä <mpenttil@redhat.com>
This seems to lack a commit message.
Also where is the real user of your changes? It seems like only this
test case consumes it currently and you actually add dead code?
* Re: [PATCH v7 6/6] mm:/migrate_device.c: remove migrate_vma_collect_*()
2026-03-30 4:30 ` [PATCH v7 6/6] mm:/migrate_device.c: remove migrate_vma_collect_*() mpenttil
@ 2026-03-30 6:22 ` Christoph Hellwig
2026-03-30 6:47 ` Mika Penttilä
0 siblings, 1 reply; 15+ messages in thread
From: Christoph Hellwig @ 2026-03-30 6:22 UTC (permalink / raw)
To: mpenttil
Cc: linux-mm, linux-kernel, David Hildenbrand, Jason Gunthorpe,
Leon Romanovsky, Alistair Popple, Balbir Singh, Zi Yan,
Matthew Brost
On Mon, Mar 30, 2026 at 07:30:17AM +0300, mpenttil@redhat.com wrote:
> From: Mika Penttilä <mpenttil@redhat.com>
>
> With the unified fault handling and migrate path,
> the migrate_vma_collect_*() functions are unused,
> let's remove them.
This should go into the patch removing the uses.
* Re: [PATCH v7 5/6] mm: add new testcase for the migrate on fault case
2026-03-30 6:21 ` Christoph Hellwig
@ 2026-03-30 6:40 ` Mika Penttilä
0 siblings, 0 replies; 15+ messages in thread
From: Mika Penttilä @ 2026-03-30 6:40 UTC (permalink / raw)
To: Christoph Hellwig
Cc: linux-mm, linux-kernel, David Hildenbrand, Jason Gunthorpe,
Leon Romanovsky, Alistair Popple, Balbir Singh, Zi Yan,
Matthew Brost, Marco Pagani
Hi,
On 3/30/26 09:21, Christoph Hellwig wrote:
> On Mon, Mar 30, 2026 at 07:30:16AM +0300, mpenttil@redhat.com wrote:
>> From: Mika Penttilä <mpenttil@redhat.com>
> This seems to lack a commit message.
>
> Also where is the real user of your changes? It seems like only this
> test case consumes it currently and you actually add dead code?
>
The changes make it possible to initiate migration from hmm_range_fault()
while handling faults. This has some advantages, such as a more efficient
flow, and it leaves the VMA populated (which is needed for migration).
Also, the current migrate_vma_setup()-initiated flow is converted to use the
new code, so it is not dead code.
Thanks,
Mika
* Re: [PATCH v7 6/6] mm:/migrate_device.c: remove migrate_vma_collect_*()
2026-03-30 6:22 ` Christoph Hellwig
@ 2026-03-30 6:47 ` Mika Penttilä
0 siblings, 0 replies; 15+ messages in thread
From: Mika Penttilä @ 2026-03-30 6:47 UTC (permalink / raw)
To: Christoph Hellwig
Cc: linux-mm, linux-kernel, David Hildenbrand, Jason Gunthorpe,
Leon Romanovsky, Alistair Popple, Balbir Singh, Zi Yan,
Matthew Brost
Hi,
On 3/30/26 09:22, Christoph Hellwig wrote:
> On Mon, Mar 30, 2026 at 07:30:17AM +0300, mpenttil@redhat.com wrote:
>> From: Mika Penttilä <mpenttil@redhat.com>
>>
>> With the unified fault handling and migrate path,
>> the migrate_vma_collect_*() functions are unused,
>> let's remove them.
> This should go into the patch removing the uses.
>
Patch 4 makes the old paths unused, but it is already quite big,
so I thought it would be better to have the removal in a patch of its own.
Thanks,
Mika
* Re: [PATCH v7 3/6] mm/hmm: do the plumbing for HMM to participate in migration
2026-03-30 4:30 ` [PATCH v7 3/6] mm/hmm: do the plumbing for HMM to participate in migration mpenttil
@ 2026-03-30 11:05 ` kernel test robot
2026-03-30 11:05 ` kernel test robot
1 sibling, 0 replies; 15+ messages in thread
From: kernel test robot @ 2026-03-30 11:05 UTC (permalink / raw)
To: mpenttil, linux-mm
Cc: llvm, oe-kbuild-all, linux-kernel, Mika Penttilä,
David Hildenbrand, Jason Gunthorpe, Leon Romanovsky,
Alistair Popple, Balbir Singh, Zi Yan, Matthew Brost
Hi,
kernel test robot noticed the following build errors:
[auto build test ERROR on 7aaa8047eafd0bd628065b15757d9b48c5f9c07d]
url: https://github.com/intel-lab-lkp/linux/commits/mpenttil-redhat-com/mm-Kconfig-changes-for-migrate-on-fault-for-device-pages/20260330-124915
base: 7aaa8047eafd0bd628065b15757d9b48c5f9c07d
patch link: https://lore.kernel.org/r/20260330043017.251808-4-mpenttil%40redhat.com
patch subject: [PATCH v7 3/6] mm/hmm: do the plumbing for HMM to participate in migration
config: hexagon-allnoconfig (https://download.01.org/0day-ci/archive/20260330/202603301832.rpYcya7E-lkp@intel.com/config)
compiler: clang version 23.0.0git (https://github.com/llvm/llvm-project 2cd67b8b69f78e3f95918204320c3075a74ba16c)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260330/202603301832.rpYcya7E-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202603301832.rpYcya7E-lkp@intel.com/
All errors (new ones prefixed by >>):
In file included from fs/aio.c:40:
>> include/linux/migrate.h:114:1: error: function definition is not allowed here
114 | {
| ^
include/linux/migrate.h:127:1: error: function definition is not allowed here
127 | {
| ^
include/linux/migrate.h:131:1: error: function definition is not allowed here
131 | {
| ^
In file included from fs/aio.c:41:
In file included from include/linux/ramfs.h:5:
In file included from include/linux/fs_parser.h:11:
>> include/linux/fs_context.h:141:1: error: function definition is not allowed here
141 | {
| ^
In file included from fs/aio.c:41:
In file included from include/linux/ramfs.h:5:
>> include/linux/fs_parser.h:75:1: error: function definition is not allowed here
75 | {
| ^
include/linux/fs_parser.h:95:1: error: function definition is not allowed here
95 | { return true; }
| ^
In file included from fs/aio.c:41:
>> include/linux/ramfs.h:15:1: error: function definition is not allowed here
15 | {
| ^
In file included from fs/aio.c:49:
>> fs/internal.h:113:1: error: function definition is not allowed here
113 | {
| ^
fs/internal.h:121:1: error: function definition is not allowed here
121 | {
| ^
fs/internal.h:150:1: error: function definition is not allowed here
150 | {
| ^
fs/internal.h:170:1: error: function definition is not allowed here
170 | {
| ^
>> fs/internal.h:214:6: error: conflicting types for 'in_group_or_capable'
214 | bool in_group_or_capable(struct mnt_idmap *idmap,
| ^
include/linux/fs.h:1842:6: note: previous declaration is here
1842 | bool in_group_or_capable(struct mnt_idmap *idmap,
| ^
In file included from fs/aio.c:49:
fs/internal.h:313:1: error: function definition is not allowed here
313 | {
| ^
fs/internal.h:319:1: error: function definition is not allowed here
319 | {
| ^
>> fs/internal.h:330:19: error: conflicting types for 'mnt_idmap_get'
330 | struct mnt_idmap *mnt_idmap_get(struct mnt_idmap *idmap);
| ^
include/linux/mnt_idmapping.h:124:19: note: previous declaration is here
124 | struct mnt_idmap *mnt_idmap_get(struct mnt_idmap *idmap);
| ^
In file included from fs/aio.c:49:
>> fs/internal.h:331:6: error: conflicting types for 'mnt_idmap_put'
331 | void mnt_idmap_put(struct mnt_idmap *idmap);
| ^
include/linux/mnt_idmapping.h:125:6: note: previous declaration is here
125 | void mnt_idmap_put(struct mnt_idmap *idmap);
| ^
In file included from fs/aio.c:49:
fs/internal.h:352:1: error: function definition is not allowed here
352 | {
| ^
>> fs/aio.c:245:1: error: function definition is not allowed here
245 | {
| ^
>> fs/aio.c:257:37: error: variable has incomplete type 'const struct file_operations'
257 | static const struct file_operations aio_ring_fops;
| ^
include/linux/fs_context.h:19:8: note: forward declaration of 'struct file_operations'
19 | struct file_operations;
| ^
fatal error: too many errors emitted, stopping now [-ferror-limit=]
20 errors generated.
--
In file included from mm/filemap.c:45:
>> include/linux/migrate.h:114:1: error: function definition is not allowed here
114 | {
| ^
include/linux/migrate.h:127:1: error: function definition is not allowed here
127 | {
| ^
include/linux/migrate.h:131:1: error: function definition is not allowed here
131 | {
| ^
In file included from mm/filemap.c:46:
>> include/linux/pipe_fs_i.h:162:1: error: function definition is not allowed here
162 | {
| ^
include/linux/pipe_fs_i.h:176:1: error: function definition is not allowed here
176 | {
| ^
include/linux/pipe_fs_i.h:186:1: error: function definition is not allowed here
186 | {
| ^
include/linux/pipe_fs_i.h:198:1: error: function definition is not allowed here
198 | {
| ^
include/linux/pipe_fs_i.h:207:1: error: function definition is not allowed here
207 | {
| ^
include/linux/pipe_fs_i.h:216:1: error: function definition is not allowed here
216 | {
| ^
include/linux/pipe_fs_i.h:225:1: error: function definition is not allowed here
225 | {
| ^
include/linux/pipe_fs_i.h:236:1: error: function definition is not allowed here
236 | {
| ^
include/linux/pipe_fs_i.h:245:1: error: function definition is not allowed here
245 | {
| ^
include/linux/pipe_fs_i.h:258:1: error: function definition is not allowed here
258 | {
| ^
include/linux/pipe_fs_i.h:269:1: error: function definition is not allowed here
269 | {
| ^
include/linux/pipe_fs_i.h:283:1: error: function definition is not allowed here
283 | {
| ^
include/linux/pipe_fs_i.h:296:1: error: function definition is not allowed here
296 | {
| ^
In file included from mm/filemap.c:47:
>> include/linux/splice.h:94:1: error: function definition is not allowed here
94 | {
| ^
In file included from mm/filemap.c:48:
>> include/linux/rcupdate_wait.h:63:1: error: function definition is not allowed here
63 | {
| ^
include/linux/rcupdate_wait.h:74:1: error: function definition is not allowed here
74 | {
| ^
fatal error: too many errors emitted, stopping now [-ferror-limit=]
20 errors generated.
--
In file included from mm/folio-compat.c:8:
>> include/linux/migrate.h:114:1: error: function definition is not allowed here
114 | {
| ^
include/linux/migrate.h:127:1: error: function definition is not allowed here
127 | {
| ^
include/linux/migrate.h:131:1: error: function definition is not allowed here
131 | {
| ^
In file included from mm/folio-compat.c:10:
In file included from include/linux/rmap.h:12:
In file included from include/linux/memcontrol.h:18:
>> include/linux/page_counter.h:58:1: error: function definition is not allowed here
58 | {
| ^
include/linux/page_counter.h:67:1: error: function definition is not allowed here
67 | {
| ^
include/linux/page_counter.h:82:1: error: function definition is not allowed here
82 | {
| ^
include/linux/page_counter.h:91:1: error: function definition is not allowed here
91 | {
| ^
include/linux/page_counter.h:109:39: error: function definition is not allowed here
109 | bool recursive_protection) {}
| ^
In file included from mm/folio-compat.c:10:
In file included from include/linux/rmap.h:12:
In file included from include/linux/memcontrol.h:19:
In file included from include/linux/vmpressure.h:11:
>> include/linux/eventfd.h:44:1: error: function definition is not allowed here
44 | {
| ^
include/linux/eventfd.h:88:1: error: function definition is not allowed here
88 | {
| ^
In file included from mm/folio-compat.c:10:
In file included from include/linux/rmap.h:12:
In file included from include/linux/memcontrol.h:19:
>> include/linux/vmpressure.h:48:58: error: function definition is not allowed here
48 | unsigned long scanned, unsigned long reclaimed) {}
| ^
include/linux/vmpressure.h:50:18: error: function definition is not allowed here
50 | int prio) {}
| ^
In file included from mm/folio-compat.c:10:
In file included from include/linux/rmap.h:12:
In file included from include/linux/memcontrol.h:23:
In file included from include/linux/writeback.h:11:
>> include/linux/flex_proportions.h:64:1: error: function definition is not allowed here
64 | {
| ^
In file included from mm/folio-compat.c:10:
In file included from include/linux/rmap.h:12:
In file included from include/linux/memcontrol.h:23:
In file included from include/linux/writeback.h:12:
>> include/linux/backing-dev-defs.h:281:1: error: function definition is not allowed here
281 | {
| ^
include/linux/backing-dev-defs.h:286:1: error: function definition is not allowed here
286 | {
| ^
include/linux/backing-dev-defs.h:290:1: error: function definition is not allowed here
290 | {
| ^
include/linux/backing-dev-defs.h:294:1: error: function definition is not allowed here
294 | {
| ^
include/linux/backing-dev-defs.h:298:1: error: function definition is not allowed here
298 | {
| ^
In file included from mm/folio-compat.c:10:
In file included from include/linux/rmap.h:12:
In file included from include/linux/memcontrol.h:23:
In file included from include/linux/writeback.h:13:
In file included from include/linux/blk_types.h:10:
>> include/linux/bvec.h:43:1: error: function definition is not allowed here
43 | {
| ^
fatal error: too many errors emitted, stopping now [-ferror-limit=]
20 errors generated.
--
In file included from mm/vmscan.c:42:
>> include/linux/migrate.h:114:1: error: function definition is not allowed here
114 | {
| ^
include/linux/migrate.h:127:1: error: function definition is not allowed here
127 | {
| ^
include/linux/migrate.h:131:1: error: function definition is not allowed here
131 | {
| ^
In file included from mm/vmscan.c:43:
>> include/linux/delayacct.h:255:1: error: function definition is not allowed here
255 | {}
| ^
include/linux/delayacct.h:257:1: error: function definition is not allowed here
257 | {}
| ^
include/linux/delayacct.h:259:1: error: function definition is not allowed here
259 | {}
| ^
include/linux/delayacct.h:261:1: error: function definition is not allowed here
261 | {}
| ^
include/linux/delayacct.h:263:1: error: function definition is not allowed here
263 | {}
| ^
include/linux/delayacct.h:266:1: error: function definition is not allowed here
266 | { return 0; }
| ^
include/linux/delayacct.h:268:1: error: function definition is not allowed here
268 | { return 0; }
| ^
include/linux/delayacct.h:270:1: error: function definition is not allowed here
270 | { return 0; }
| ^
include/linux/delayacct.h:272:1: error: function definition is not allowed here
272 | {}
| ^
include/linux/delayacct.h:274:1: error: function definition is not allowed here
274 | {}
| ^
include/linux/delayacct.h:276:1: error: function definition is not allowed here
276 | {}
| ^
include/linux/delayacct.h:278:1: error: function definition is not allowed here
278 | {}
| ^
include/linux/delayacct.h:280:1: error: function definition is not allowed here
280 | {}
| ^
include/linux/delayacct.h:282:1: error: function definition is not allowed here
282 | {}
| ^
include/linux/delayacct.h:284:1: error: function definition is not allowed here
284 | {}
| ^
include/linux/delayacct.h:286:1: error: function definition is not allowed here
286 | {}
| ^
fatal error: too many errors emitted, stopping now [-ferror-limit=]
20 errors generated.
--
In file included from mm/shmem.c:73:
>> include/linux/migrate.h:114:1: error: function definition is not allowed here
114 | {
| ^
include/linux/migrate.h:127:1: error: function definition is not allowed here
127 | {
| ^
include/linux/migrate.h:131:1: error: function definition is not allowed here
131 | {
| ^
In file included from mm/shmem.c:77:
In file included from include/linux/syscalls.h:89:
>> include/linux/sem.h:18:1: error: function definition is not allowed here
18 | {
| ^
include/linux/sem.h:23:1: error: function definition is not allowed here
23 | {
| ^
In file included from mm/shmem.c:77:
In file included from include/linux/syscalls.h:95:
In file included from include/trace/syscall.h:5:
>> include/linux/tracepoint.h:49:1: error: function definition is not allowed here
49 | {
| ^
include/linux/tracepoint.h:75:1: error: function definition is not allowed here
75 | {
| ^
include/linux/tracepoint.h:80:1: error: function definition is not allowed here
80 | {
| ^
include/linux/tracepoint.h:85:1: error: function definition is not allowed here
85 | {
| ^
include/linux/tracepoint.h:92:1: error: function definition is not allowed here
92 | {
| ^
include/linux/tracepoint.h:99:1: error: function definition is not allowed here
99 | {
| ^
include/linux/tracepoint.h:127:1: error: function definition is not allowed here
127 | { }
| ^
include/linux/tracepoint.h:129:1: error: function definition is not allowed here
129 | {
| ^
include/linux/tracepoint.h:159:1: error: function definition is not allowed here
159 | {
| ^
In file included from mm/shmem.c:77:
In file included from include/linux/syscalls.h:95:
In file included from include/trace/syscall.h:7:
In file included from include/linux/trace_events.h:6:
In file included from include/linux/ring_buffer.h:7:
>> include/linux/poll.h:43:1: error: function definition is not allowed here
43 | {
| ^
include/linux/poll.h:63:1: error: function definition is not allowed here
63 | {
| ^
include/linux/poll.h:68:1: error: function definition is not allowed here
68 | {
| ^
include/linux/poll.h:74:1: error: function definition is not allowed here
74 | {
| ^
include/linux/poll.h:79:1: error: function definition is not allowed here
79 | {
| ^
fatal error: too many errors emitted, stopping now [-ferror-limit=]
20 errors generated.
..
vim +114 include/linux/migrate.h
112
113 static inline enum migrate_vma_info hmm_select_migrate(struct hmm_range *range)
> 114 {
115 return MIGRATE_VMA_SELECT_NONE;
116 }
117
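A note on the diagnostic: a cascade of "function definition is not allowed
here" errors at the opening brace of otherwise valid inline stubs usually
means an earlier construct in the include chain was left unterminated (for
example a missing closing brace in a config-dependent stub), so the compiler
still believes it is inside a function and rejects what looks like a nested
definition. An illustrative, deliberately non-compiling snippet (hypothetical,
not the actual defect in this patch):

```c
/* Illustration only: stub_a() is never closed, so clang parses stub_b()
 * as a nested function definition, which C forbids. */
static inline int stub_a(void)
{
	return 0;
/* missing '}' here */

static inline int stub_b(void)
{	/* clang: error: function definition is not allowed here */
	return 0;
}
```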
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* Re: [PATCH v7 3/6] mm/hmm: do the plumbing for HMM to participate in migration
2026-03-30 4:30 ` [PATCH v7 3/6] mm/hmm: do the plumbing for HMM to participate in migration mpenttil
2026-03-30 11:05 ` kernel test robot
@ 2026-03-30 11:05 ` kernel test robot
1 sibling, 0 replies; 15+ messages in thread
From: kernel test robot @ 2026-03-30 11:05 UTC (permalink / raw)
To: mpenttil, linux-mm
Cc: oe-kbuild-all, linux-kernel, Mika Penttilä,
David Hildenbrand, Jason Gunthorpe, Leon Romanovsky,
Alistair Popple, Balbir Singh, Zi Yan, Matthew Brost
Hi,
kernel test robot noticed the following build errors:
[auto build test ERROR on 7aaa8047eafd0bd628065b15757d9b48c5f9c07d]
url: https://github.com/intel-lab-lkp/linux/commits/mpenttil-redhat-com/mm-Kconfig-changes-for-migrate-on-fault-for-device-pages/20260330-124915
base: 7aaa8047eafd0bd628065b15757d9b48c5f9c07d
patch link: https://lore.kernel.org/r/20260330043017.251808-4-mpenttil%40redhat.com
patch subject: [PATCH v7 3/6] mm/hmm: do the plumbing for HMM to participate in migration
config: powerpc-allnoconfig (https://download.01.org/0day-ci/archive/20260330/202603301844.FdEMshbo-lkp@intel.com/config)
compiler: powerpc-linux-gcc (GCC) 15.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260330/202603301844.FdEMshbo-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202603301844.FdEMshbo-lkp@intel.com/
All error/warnings (new ones prefixed by >>):
fs/aio.c:702:21: error: assignment to 'struct kioctx_table *' from incompatible pointer type 'struct kioctx_table *' [-Wincompatible-pointer-types]
702 | old = rcu_dereference_raw(mm->ioctx_table);
| ^
fs/aio.c: In function 'softleaf_entry_wait_on_locked':
fs/aio.c:719:13: error: invalid storage class for function 'aio_nr_sub'
719 | static void aio_nr_sub(unsigned nr)
| ^~~~~~~~~~
fs/aio.c:732:23: error: invalid storage class for function 'ioctx_alloc'
732 | static struct kioctx *ioctx_alloc(unsigned nr_events)
| ^~~~~~~~~~~
fs/aio.c:847:12: error: invalid storage class for function 'kill_ioctx'
847 | static int kill_ioctx(struct mm_struct *mm, struct kioctx *ctx,
| ^~~~~~~~~~
fs/aio.c: In function 'kill_ioctx':
fs/aio.c:858:15: error: assignment to 'struct kioctx_table *' from incompatible pointer type 'struct kioctx_table *' [-Wincompatible-pointer-types]
858 | table = rcu_dereference_raw(mm->ioctx_table);
| ^
fs/aio.c:859:21: warning: comparison of distinct pointer types lacks a cast [-Wcompare-distinct-pointer-types]
859 | WARN_ON(ctx != rcu_access_pointer(table->table[ctx->id]));
| ^~
include/asm-generic/bug.h:110:32: note: in definition of macro 'WARN_ON'
110 | int __ret_warn_on = !!(condition); \
| ^~~~~~~~~
In file included from include/linux/rbtree.h:24,
from include/linux/mm_types.h:11,
from include/linux/mmzone.h:22,
from include/linux/gfp.h:7,
from include/linux/xarray.h:16,
from include/linux/list_lru.h:14,
from include/linux/fs/super_types.h:7,
from include/linux/fs/super.h:5,
from include/linux/fs.h:5:
fs/aio.c: In function 'exit_aio':
include/linux/rcupdate.h:526:1: error: initialization of 'struct kioctx_table *' from incompatible pointer type 'struct kioctx_table *' [-Wincompatible-pointer-types]
526 | ({ \
| ^
include/linux/rcupdate.h:531:32: note: in expansion of macro '__rcu_dereference_raw'
531 | #define rcu_dereference_raw(p) __rcu_dereference_raw(p, __UNIQUE_ID(rcu))
| ^~~~~~~~~~~~~~~~~~~~~
fs/aio.c:893:38: note: in expansion of macro 'rcu_dereference_raw'
893 | struct kioctx_table *table = rcu_dereference_raw(mm->ioctx_table);
| ^~~~~~~~~~~~~~~~~~~
include/linux/rcupdate.h:520:1: error: initialization of 'struct kioctx *' from incompatible pointer type 'struct kioctx *' [-Wincompatible-pointer-types]
520 | ({ \
| ^
include/linux/rcupdate.h:743:9: note: in expansion of macro '__rcu_dereference_protected'
743 | __rcu_dereference_protected((p), __UNIQUE_ID(rcu), (c), __rcu)
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~
fs/aio.c:906:25: note: in expansion of macro 'rcu_dereference_protected'
906 | rcu_dereference_protected(table->table[i], true);
| ^~~~~~~~~~~~~~~~~~~~~~~~~
fs/aio.c: In function 'softleaf_entry_wait_on_locked':
fs/aio.c:933:13: error: invalid storage class for function 'put_reqs_available'
933 | static void put_reqs_available(struct kioctx *ctx, unsigned nr)
| ^~~~~~~~~~~~~~~~~~
fs/aio.c:950:13: error: invalid storage class for function '__get_reqs_available'
950 | static bool __get_reqs_available(struct kioctx *ctx)
| ^~~~~~~~~~~~~~~~~~~~
fs/aio.c:984:13: error: invalid storage class for function 'refill_reqs_available'
984 | static void refill_reqs_available(struct kioctx *ctx, unsigned head,
| ^~~~~~~~~~~~~~~~~~~~~
fs/aio.c:1013:13: error: invalid storage class for function 'user_refill_reqs_available'
1013 | static void user_refill_reqs_available(struct kioctx *ctx)
| ^~~~~~~~~~~~~~~~~~~~~~~~~~
fs/aio.c:1038:13: error: invalid storage class for function 'get_reqs_available'
1038 | static bool get_reqs_available(struct kioctx *ctx)
| ^~~~~~~~~~~~~~~~~~
fs/aio.c:1053:33: error: invalid storage class for function 'aio_get_req'
1053 | static inline struct aio_kiocb *aio_get_req(struct kioctx *ctx)
| ^~~~~~~~~~~
fs/aio.c:1074:23: error: invalid storage class for function 'lookup_ioctx'
1074 | static struct kioctx *lookup_ioctx(unsigned long ctx_id)
| ^~~~~~~~~~~~
fs/aio.c: In function 'lookup_ioctx':
fs/aio.c:1086:15: error: assignment to 'struct kioctx_table *' from incompatible pointer type 'struct kioctx_table *' [-Wincompatible-pointer-types]
1086 | table = rcu_dereference(mm->ioctx_table);
| ^
fs/aio.c:1092:13: error: assignment to 'struct kioctx *' from incompatible pointer type 'struct kioctx *' [-Wincompatible-pointer-types]
1092 | ctx = rcu_dereference(table->table[id]);
| ^
fs/aio.c: In function 'softleaf_entry_wait_on_locked':
fs/aio.c:1102:20: error: invalid storage class for function 'iocb_destroy'
1102 | static inline void iocb_destroy(struct aio_kiocb *iocb)
| ^~~~~~~~~~~~
fs/aio.c:1120:13: error: invalid storage class for function 'aio_complete'
1120 | static void aio_complete(struct aio_kiocb *iocb)
| ^~~~~~~~~~~~
fs/aio.c:1205:20: error: invalid storage class for function 'iocb_put'
1205 | static inline void iocb_put(struct aio_kiocb *iocb)
| ^~~~~~~~
fs/aio.c:1217:13: error: invalid storage class for function 'aio_read_events_ring'
1217 | static long aio_read_events_ring(struct kioctx *ctx,
| ^~~~~~~~~~~~~~~~~~~~
fs/aio.c:1294:13: error: invalid storage class for function 'aio_read_events'
1294 | static bool aio_read_events(struct kioctx *ctx, long min_nr, long nr,
| ^~~~~~~~~~~~~~~
fs/aio.c:1311:13: error: invalid storage class for function 'read_events'
1311 | static long read_events(struct kioctx *ctx, long min_nr, long nr,
| ^~~~~~~~~~~
In file included from include/linux/syscalls.h:105:
>> arch/powerpc/include/asm/syscall_wrapper.h:21:21: error: invalid storage class for function '__se_sys_io_setup'
21 | static long __se_sys##name(__MAP(x,__SC_LONG,__VA_ARGS__)); \
| ^~~~~~~~
include/linux/syscalls.h:236:9: note: in expansion of macro '__SYSCALL_DEFINEx'
236 | __SYSCALL_DEFINEx(x, sname, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
include/linux/syscalls.h:226:36: note: in expansion of macro 'SYSCALL_DEFINEx'
226 | #define SYSCALL_DEFINE2(name, ...) SYSCALL_DEFINEx(2, _##name, __VA_ARGS__)
| ^~~~~~~~~~~~~~~
fs/aio.c:1381:1: note: in expansion of macro 'SYSCALL_DEFINE2'
1381 | SYSCALL_DEFINE2(io_setup, unsigned, nr_events, aio_context_t __user *, ctxp)
| ^~~~~~~~~~~~~~~
>> arch/powerpc/include/asm/syscall_wrapper.h:22:28: error: invalid storage class for function '__do_sys_io_setup'
22 | static inline long __do_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__)); \
| ^~~~~~~~
include/linux/syscalls.h:236:9: note: in expansion of macro '__SYSCALL_DEFINEx'
236 | __SYSCALL_DEFINEx(x, sname, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
include/linux/syscalls.h:226:36: note: in expansion of macro 'SYSCALL_DEFINEx'
226 | #define SYSCALL_DEFINE2(name, ...) SYSCALL_DEFINEx(2, _##name, __VA_ARGS__)
| ^~~~~~~~~~~~~~~
fs/aio.c:1381:1: note: in expansion of macro 'SYSCALL_DEFINE2'
1381 | SYSCALL_DEFINE2(io_setup, unsigned, nr_events, aio_context_t __user *, ctxp)
| ^~~~~~~~~~~~~~~
>> arch/powerpc/include/asm/syscall_wrapper.h:23:14: error: static declaration of 'sys_io_setup' follows non-static declaration
23 | long sys##name(const struct pt_regs *regs) \
| ^~~
include/linux/syscalls.h:236:9: note: in expansion of macro '__SYSCALL_DEFINEx'
236 | __SYSCALL_DEFINEx(x, sname, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
include/linux/syscalls.h:226:36: note: in expansion of macro 'SYSCALL_DEFINEx'
226 | #define SYSCALL_DEFINE2(name, ...) SYSCALL_DEFINEx(2, _##name, __VA_ARGS__)
| ^~~~~~~~~~~~~~~
fs/aio.c:1381:1: note: in expansion of macro 'SYSCALL_DEFINE2'
1381 | SYSCALL_DEFINE2(io_setup, unsigned, nr_events, aio_context_t __user *, ctxp)
| ^~~~~~~~~~~~~~~
arch/powerpc/include/asm/syscall_wrapper.h:19:14: note: previous declaration of 'sys_io_setup' with type 'long int(const struct pt_regs *)'
19 | long sys##name(const struct pt_regs *regs); \
| ^~~
include/linux/syscalls.h:236:9: note: in expansion of macro '__SYSCALL_DEFINEx'
236 | __SYSCALL_DEFINEx(x, sname, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
include/linux/syscalls.h:226:36: note: in expansion of macro 'SYSCALL_DEFINEx'
226 | #define SYSCALL_DEFINE2(name, ...) SYSCALL_DEFINEx(2, _##name, __VA_ARGS__)
| ^~~~~~~~~~~~~~~
fs/aio.c:1381:1: note: in expansion of macro 'SYSCALL_DEFINE2'
1381 | SYSCALL_DEFINE2(io_setup, unsigned, nr_events, aio_context_t __user *, ctxp)
| ^~~~~~~~~~~~~~~
fs/aio.c: In function 'sys_io_setup':
>> arch/powerpc/include/asm/syscall_wrapper.h:25:24: error: implicit declaration of function '__se_sys_io_setup'; did you mean 'sys_io_setup'? [-Wimplicit-function-declaration]
25 | return __se_sys##name(SC_POWERPC_REGS_TO_ARGS(x,__VA_ARGS__)); \
| ^~~~~~~~
include/linux/syscalls.h:236:9: note: in expansion of macro '__SYSCALL_DEFINEx'
236 | __SYSCALL_DEFINEx(x, sname, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
include/linux/syscalls.h:226:36: note: in expansion of macro 'SYSCALL_DEFINEx'
226 | #define SYSCALL_DEFINE2(name, ...) SYSCALL_DEFINEx(2, _##name, __VA_ARGS__)
| ^~~~~~~~~~~~~~~
fs/aio.c:1381:1: note: in expansion of macro 'SYSCALL_DEFINE2'
1381 | SYSCALL_DEFINE2(io_setup, unsigned, nr_events, aio_context_t __user *, ctxp)
| ^~~~~~~~~~~~~~~
fs/aio.c: In function 'softleaf_entry_wait_on_locked':
arch/powerpc/include/asm/syscall_wrapper.h:27:21: error: invalid storage class for function '__se_sys_io_setup'
27 | static long __se_sys##name(__MAP(x,__SC_LONG,__VA_ARGS__)) \
| ^~~~~~~~
include/linux/syscalls.h:236:9: note: in expansion of macro '__SYSCALL_DEFINEx'
236 | __SYSCALL_DEFINEx(x, sname, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
include/linux/syscalls.h:226:36: note: in expansion of macro 'SYSCALL_DEFINEx'
226 | #define SYSCALL_DEFINE2(name, ...) SYSCALL_DEFINEx(2, _##name, __VA_ARGS__)
| ^~~~~~~~~~~~~~~
fs/aio.c:1381:1: note: in expansion of macro 'SYSCALL_DEFINE2'
1381 | SYSCALL_DEFINE2(io_setup, unsigned, nr_events, aio_context_t __user *, ctxp)
| ^~~~~~~~~~~~~~~
fs/aio.c: In function '__se_sys_io_setup':
>> arch/powerpc/include/asm/syscall_wrapper.h:29:28: error: implicit declaration of function '__do_sys_io_setup'; did you mean '__se_sys_io_setup'? [-Wimplicit-function-declaration]
29 | long ret = __do_sys##name(__MAP(x,__SC_CAST,__VA_ARGS__)); \
| ^~~~~~~~
include/linux/syscalls.h:236:9: note: in expansion of macro '__SYSCALL_DEFINEx'
236 | __SYSCALL_DEFINEx(x, sname, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
include/linux/syscalls.h:226:36: note: in expansion of macro 'SYSCALL_DEFINEx'
226 | #define SYSCALL_DEFINE2(name, ...) SYSCALL_DEFINEx(2, _##name, __VA_ARGS__)
| ^~~~~~~~~~~~~~~
fs/aio.c:1381:1: note: in expansion of macro 'SYSCALL_DEFINE2'
1381 | SYSCALL_DEFINE2(io_setup, unsigned, nr_events, aio_context_t __user *, ctxp)
| ^~~~~~~~~~~~~~~
fs/aio.c: In function 'softleaf_entry_wait_on_locked':
arch/powerpc/include/asm/syscall_wrapper.h:34:28: error: invalid storage class for function '__do_sys_io_setup'
34 | static inline long __do_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__))
| ^~~~~~~~
include/linux/syscalls.h:236:9: note: in expansion of macro '__SYSCALL_DEFINEx'
236 | __SYSCALL_DEFINEx(x, sname, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
include/linux/syscalls.h:226:36: note: in expansion of macro 'SYSCALL_DEFINEx'
226 | #define SYSCALL_DEFINE2(name, ...) SYSCALL_DEFINEx(2, _##name, __VA_ARGS__)
| ^~~~~~~~~~~~~~~
fs/aio.c:1381:1: note: in expansion of macro 'SYSCALL_DEFINE2'
1381 | SYSCALL_DEFINE2(io_setup, unsigned, nr_events, aio_context_t __user *, ctxp)
| ^~~~~~~~~~~~~~~
>> arch/powerpc/include/asm/syscall_wrapper.h:21:21: error: invalid storage class for function '__se_sys_io_destroy'
21 | static long __se_sys##name(__MAP(x,__SC_LONG,__VA_ARGS__)); \
| ^~~~~~~~
include/linux/syscalls.h:236:9: note: in expansion of macro '__SYSCALL_DEFINEx'
236 | __SYSCALL_DEFINEx(x, sname, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
include/linux/syscalls.h:225:36: note: in expansion of macro 'SYSCALL_DEFINEx'
225 | #define SYSCALL_DEFINE1(name, ...) SYSCALL_DEFINEx(1, _##name, __VA_ARGS__)
| ^~~~~~~~~~~~~~~
fs/aio.c:1450:1: note: in expansion of macro 'SYSCALL_DEFINE1'
1450 | SYSCALL_DEFINE1(io_destroy, aio_context_t, ctx)
| ^~~~~~~~~~~~~~~
>> arch/powerpc/include/asm/syscall_wrapper.h:22:28: error: invalid storage class for function '__do_sys_io_destroy'
22 | static inline long __do_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__)); \
| ^~~~~~~~
include/linux/syscalls.h:236:9: note: in expansion of macro '__SYSCALL_DEFINEx'
236 | __SYSCALL_DEFINEx(x, sname, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
include/linux/syscalls.h:225:36: note: in expansion of macro 'SYSCALL_DEFINEx'
225 | #define SYSCALL_DEFINE1(name, ...) SYSCALL_DEFINEx(1, _##name, __VA_ARGS__)
| ^~~~~~~~~~~~~~~
fs/aio.c:1450:1: note: in expansion of macro 'SYSCALL_DEFINE1'
1450 | SYSCALL_DEFINE1(io_destroy, aio_context_t, ctx)
| ^~~~~~~~~~~~~~~
>> arch/powerpc/include/asm/syscall_wrapper.h:23:14: error: static declaration of 'sys_io_destroy' follows non-static declaration
23 | long sys##name(const struct pt_regs *regs) \
| ^~~
include/linux/syscalls.h:236:9: note: in expansion of macro '__SYSCALL_DEFINEx'
236 | __SYSCALL_DEFINEx(x, sname, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
include/linux/syscalls.h:225:36: note: in expansion of macro 'SYSCALL_DEFINEx'
225 | #define SYSCALL_DEFINE1(name, ...) SYSCALL_DEFINEx(1, _##name, __VA_ARGS__)
| ^~~~~~~~~~~~~~~
fs/aio.c:1450:1: note: in expansion of macro 'SYSCALL_DEFINE1'
1450 | SYSCALL_DEFINE1(io_destroy, aio_context_t, ctx)
| ^~~~~~~~~~~~~~~
arch/powerpc/include/asm/syscall_wrapper.h:19:14: note: previous declaration of 'sys_io_destroy' with type 'long int(const struct pt_regs *)'
19 | long sys##name(const struct pt_regs *regs); \
| ^~~
include/linux/syscalls.h:236:9: note: in expansion of macro '__SYSCALL_DEFINEx'
236 | __SYSCALL_DEFINEx(x, sname, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
include/linux/syscalls.h:225:36: note: in expansion of macro 'SYSCALL_DEFINEx'
225 | #define SYSCALL_DEFINE1(name, ...) SYSCALL_DEFINEx(1, _##name, __VA_ARGS__)
| ^~~~~~~~~~~~~~~
fs/aio.c:1450:1: note: in expansion of macro 'SYSCALL_DEFINE1'
1450 | SYSCALL_DEFINE1(io_destroy, aio_context_t, ctx)
| ^~~~~~~~~~~~~~~
fs/aio.c: In function 'sys_io_destroy':
>> arch/powerpc/include/asm/syscall_wrapper.h:25:24: error: implicit declaration of function '__se_sys_io_destroy'; did you mean 'sys_io_destroy'? [-Wimplicit-function-declaration]
25 | return __se_sys##name(SC_POWERPC_REGS_TO_ARGS(x,__VA_ARGS__)); \
| ^~~~~~~~
include/linux/syscalls.h:236:9: note: in expansion of macro '__SYSCALL_DEFINEx'
236 | __SYSCALL_DEFINEx(x, sname, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
include/linux/syscalls.h:225:36: note: in expansion of macro 'SYSCALL_DEFINEx'
225 | #define SYSCALL_DEFINE1(name, ...) SYSCALL_DEFINEx(1, _##name, __VA_ARGS__)
| ^~~~~~~~~~~~~~~
fs/aio.c:1450:1: note: in expansion of macro 'SYSCALL_DEFINE1'
1450 | SYSCALL_DEFINE1(io_destroy, aio_context_t, ctx)
| ^~~~~~~~~~~~~~~
fs/aio.c: In function 'softleaf_entry_wait_on_locked':
arch/powerpc/include/asm/syscall_wrapper.h:27:21: error: invalid storage class for function '__se_sys_io_destroy'
27 | static long __se_sys##name(__MAP(x,__SC_LONG,__VA_ARGS__)) \
| ^~~~~~~~
include/linux/syscalls.h:236:9: note: in expansion of macro '__SYSCALL_DEFINEx'
236 | __SYSCALL_DEFINEx(x, sname, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
include/linux/syscalls.h:225:36: note: in expansion of macro 'SYSCALL_DEFINEx'
225 | #define SYSCALL_DEFINE1(name, ...) SYSCALL_DEFINEx(1, _##name, __VA_ARGS__)
| ^~~~~~~~~~~~~~~
fs/aio.c:1450:1: note: in expansion of macro 'SYSCALL_DEFINE1'
1450 | SYSCALL_DEFINE1(io_destroy, aio_context_t, ctx)
| ^~~~~~~~~~~~~~~
fs/aio.c: In function '__se_sys_io_destroy':
>> arch/powerpc/include/asm/syscall_wrapper.h:29:28: error: implicit declaration of function '__do_sys_io_destroy'; did you mean '__se_sys_io_destroy'? [-Wimplicit-function-declaration]
29 | long ret = __do_sys##name(__MAP(x,__SC_CAST,__VA_ARGS__)); \
| ^~~~~~~~
include/linux/syscalls.h:236:9: note: in expansion of macro '__SYSCALL_DEFINEx'
236 | __SYSCALL_DEFINEx(x, sname, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
include/linux/syscalls.h:225:36: note: in expansion of macro 'SYSCALL_DEFINEx'
225 | #define SYSCALL_DEFINE1(name, ...) SYSCALL_DEFINEx(1, _##name, __VA_ARGS__)
| ^~~~~~~~~~~~~~~
fs/aio.c:1450:1: note: in expansion of macro 'SYSCALL_DEFINE1'
1450 | SYSCALL_DEFINE1(io_destroy, aio_context_t, ctx)
| ^~~~~~~~~~~~~~~
fs/aio.c: In function 'softleaf_entry_wait_on_locked':
arch/powerpc/include/asm/syscall_wrapper.h:34:28: error: invalid storage class for function '__do_sys_io_destroy'
34 | static inline long __do_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__))
| ^~~~~~~~
include/linux/syscalls.h:236:9: note: in expansion of macro '__SYSCALL_DEFINEx'
236 | __SYSCALL_DEFINEx(x, sname, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
include/linux/syscalls.h:225:36: note: in expansion of macro 'SYSCALL_DEFINEx'
225 | #define SYSCALL_DEFINE1(name, ...) SYSCALL_DEFINEx(1, _##name, __VA_ARGS__)
| ^~~~~~~~~~~~~~~
fs/aio.c:1450:1: note: in expansion of macro 'SYSCALL_DEFINE1'
1450 | SYSCALL_DEFINE1(io_destroy, aio_context_t, ctx)
| ^~~~~~~~~~~~~~~
fs/aio.c:1480:13: error: invalid storage class for function 'aio_remove_iocb'
1480 | static void aio_remove_iocb(struct aio_kiocb *iocb)
| ^~~~~~~~~~~~~~~
fs/aio.c:1490:13: error: invalid storage class for function 'aio_complete_rw'
1490 | static void aio_complete_rw(struct kiocb *kiocb, long res)
| ^~~~~~~~~~~~~~~
fs/aio.c:1509:12: error: invalid storage class for function 'aio_prep_rw'
1509 | static int aio_prep_rw(struct kiocb *req, const struct iocb *iocb, int rw_type)
| ^~~~~~~~~~~
fs/aio.c:1544:16: error: invalid storage class for function 'aio_setup_rw'
1544 | static ssize_t aio_setup_rw(int rw, const struct iocb *iocb,
| ^~~~~~~~~~~~
fs/aio.c: In function 'aio_setup_rw':
fs/aio.c:1552:57: error: passing argument 4 of 'import_ubuf' from incompatible pointer type [-Wincompatible-pointer-types]
1552 | ssize_t ret = import_ubuf(rw, buf, len, iter);
| ^~~~
| |
| struct iov_iter *
In file included from fs/aio.c:23:
include/linux/uio.h:370:74: note: expected 'struct iov_iter *' but argument is of type 'struct iov_iter *'
370 | int import_ubuf(int type, void __user *buf, size_t len, struct iov_iter *i);
| ~~~~~~~~~~~~~~~~~^
fs/aio.c:1557:65: error: passing argument 6 of '__import_iovec' from incompatible pointer type [-Wincompatible-pointer-types]
1557 | return __import_iovec(rw, buf, len, UIO_FASTIOV, iovec, iter, compat);
| ^~~~
| |
| struct iov_iter *
include/linux/uio.h:369:35: note: expected 'struct iov_iter *' but argument is of type 'struct iov_iter *'
369 | struct iov_iter *i, bool compat);
| ~~~~~~~~~~~~~~~~~^
fs/aio.c: In function 'softleaf_entry_wait_on_locked':
fs/aio.c:1560:20: error: invalid storage class for function 'aio_rw_done'
1560 | static inline void aio_rw_done(struct kiocb *req, ssize_t ret)
| ^~~~~~~~~~~
fs/aio.c:1580:12: error: invalid storage class for function 'aio_read'
1580 | static int aio_read(struct kiocb *req, const struct iocb *iocb,
| ^~~~~~~~
fs/aio.c: In function 'aio_read':
fs/aio.c:1584:25: error: storage size of 'iter' isn't known
1584 | struct iov_iter iter;
| ^~~~
fs/aio.c:1584:25: warning: unused variable 'iter' [-Wunused-variable]
fs/aio.c: In function 'softleaf_entry_wait_on_locked':
fs/aio.c:1607:12: error: invalid storage class for function 'aio_write'
1607 | static int aio_write(struct kiocb *req, const struct iocb *iocb,
| ^~~~~~~~~
fs/aio.c: In function 'aio_write':
fs/aio.c:1611:25: error: storage size of 'iter' isn't known
1611 | struct iov_iter iter;
| ^~~~
fs/aio.c:1611:25: warning: unused variable 'iter' [-Wunused-variable]
fs/aio.c: In function 'softleaf_entry_wait_on_locked':
fs/aio.c:1639:13: error: invalid storage class for function 'aio_fsync_work'
1639 | static void aio_fsync_work(struct work_struct *work)
| ^~~~~~~~~~~~~~
In file included from include/linux/bitmap.h:9,
from include/linux/nodemask.h:91,
from include/linux/list_lru.h:12:
fs/aio.c: In function 'aio_fsync_work':
fs/aio.c:1643:38: error: passing argument 1 of 'class_override_creds_constructor' from incompatible pointer type [-Wincompatible-pointer-types]
1643 | scoped_with_creds(iocb->fsync.creds)
| ~~~~~~~~~~~^~~~~~
| |
| struct cred *
include/linux/cleanup.h:306:32: note: in definition of macro '__scoped_class'
306 | for (CLASS(_name, var)(args); ; ({ goto _label; })) \
| ^~~~
include/linux/cred.h:195:9: note: in expansion of macro 'scoped_class'
195 | scoped_class(override_creds, __UNIQUE_ID(label), cred)
| ^~~~~~~~~~~~
fs/aio.c:1643:9: note: in expansion of macro 'scoped_with_creds'
1643 | scoped_with_creds(iocb->fsync.creds)
| ^~~~~~~~~~~~~~~~~
include/linux/cred.h:192:64: note: expected 'const struct cred *' but argument is of type 'struct cred *'
192 | override_creds(override_cred), const struct cred *override_cred)
| ~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~
include/linux/cleanup.h:285:58: note: in definition of macro 'DEFINE_CLASS'
285 | static __always_inline _type class_##_name##_constructor(_init_args) \
| ^~~~~~~~~~
fs/aio.c:1646:29: error: passing argument 1 of 'put_cred' from incompatible pointer type [-Wincompatible-pointer-types]
1646 | put_cred(iocb->fsync.creds);
| ~~~~~~~~~~~^~~~~~
| |
| struct cred *
In file included from include/linux/sched/signal.h:10,
from include/linux/rcuwait.h:6,
from include/linux/percpu-rwsem.h:7,
from include/linux/fs/super_types.h:13:
include/linux/cred.h:277:48: note: expected 'const struct cred *' but argument is of type 'struct cred *'
277 | static inline void put_cred(const struct cred *cred)
| ~~~~~~~~~~~~~~~~~~~^~~~
fs/aio.c: In function 'softleaf_entry_wait_on_locked':
fs/aio.c:1650:12: error: invalid storage class for function 'aio_fsync'
1650 | static int aio_fsync(struct fsync_iocb *req, const struct iocb *iocb,
| ^~~~~~~~~
fs/aio.c: In function 'aio_fsync':
fs/aio.c:1660:20: error: assignment to 'struct cred *' from incompatible pointer type 'struct cred *' [-Wincompatible-pointer-types]
1660 | req->creds = prepare_creds();
| ^
fs/aio.c: In function 'softleaf_entry_wait_on_locked':
fs/aio.c:1670:13: error: invalid storage class for function 'aio_poll_put_work'
1670 | static void aio_poll_put_work(struct work_struct *work)
| ^~~~~~~~~~~~~~~~~
fs/aio.c:1686:13: error: invalid storage class for function 'poll_iocb_lock_wq'
1686 | static bool poll_iocb_lock_wq(struct poll_iocb *req)
| ^~~~~~~~~~~~~~~~~
fs/aio.c:1717:13: error: invalid storage class for function 'poll_iocb_unlock_wq'
1717 | static void poll_iocb_unlock_wq(struct poll_iocb *req)
| ^~~~~~~~~~~~~~~~~~~
fs/aio.c:1723:13: error: invalid storage class for function 'aio_poll_complete_work'
1723 | static void aio_poll_complete_work(struct work_struct *work)
| ^~~~~~~~~~~~~~~~~~~~~~
fs/aio.c:1769:12: error: invalid storage class for function 'aio_poll_cancel'
1769 | static int aio_poll_cancel(struct kiocb *iocb)
| ^~~~~~~~~~~~~~~
fs/aio.c:1786:12: error: invalid storage class for function 'aio_poll_wake'
1786 | static int aio_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
| ^~~~~~~~~~~~~
fs/aio.c:1876:1: error: invalid storage class for function 'aio_poll_queue_proc'
1876 | aio_poll_queue_proc(struct file *file, struct wait_queue_head *head,
| ^~~~~~~~~~~~~~~~~~~
fs/aio.c:1893:12: error: invalid storage class for function 'aio_poll'
1893 | static int aio_poll(struct aio_kiocb *aiocb, const struct iocb *iocb)
| ^~~~~~~~
fs/aio.c:1968:12: error: invalid storage class for function '__io_submit_one'
1968 | static int __io_submit_one(struct kioctx *ctx, const struct iocb *iocb,
| ^~~~~~~~~~~~~~~
fs/aio.c:2022:12: error: invalid storage class for function 'io_submit_one'
2022 | static int io_submit_one(struct kioctx *ctx, struct iocb __user *user_iocb,
| ^~~~~~~~~~~~~
>> arch/powerpc/include/asm/syscall_wrapper.h:21:21: error: invalid storage class for function '__se_sys_io_submit'
21 | static long __se_sys##name(__MAP(x,__SC_LONG,__VA_ARGS__)); \
| ^~~~~~~~
include/linux/syscalls.h:236:9: note: in expansion of macro '__SYSCALL_DEFINEx'
236 | __SYSCALL_DEFINEx(x, sname, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
include/linux/syscalls.h:227:36: note: in expansion of macro 'SYSCALL_DEFINEx'
227 | #define SYSCALL_DEFINE3(name, ...) SYSCALL_DEFINEx(3, _##name, __VA_ARGS__)
| ^~~~~~~~~~~~~~~
fs/aio.c:2081:1: note: in expansion of macro 'SYSCALL_DEFINE3'
2081 | SYSCALL_DEFINE3(io_submit, aio_context_t, ctx_id, long, nr,
| ^~~~~~~~~~~~~~~
>> arch/powerpc/include/asm/syscall_wrapper.h:22:28: error: invalid storage class for function '__do_sys_io_submit'
22 | static inline long __do_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__)); \
| ^~~~~~~~
include/linux/syscalls.h:236:9: note: in expansion of macro '__SYSCALL_DEFINEx'
236 | __SYSCALL_DEFINEx(x, sname, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
include/linux/syscalls.h:227:36: note: in expansion of macro 'SYSCALL_DEFINEx'
227 | #define SYSCALL_DEFINE3(name, ...) SYSCALL_DEFINEx(3, _##name, __VA_ARGS__)
| ^~~~~~~~~~~~~~~
fs/aio.c:2081:1: note: in expansion of macro 'SYSCALL_DEFINE3'
2081 | SYSCALL_DEFINE3(io_submit, aio_context_t, ctx_id, long, nr,
| ^~~~~~~~~~~~~~~
>> arch/powerpc/include/asm/syscall_wrapper.h:23:14: error: static declaration of 'sys_io_submit' follows non-static declaration
23 | long sys##name(const struct pt_regs *regs) \
| ^~~
include/linux/syscalls.h:236:9: note: in expansion of macro '__SYSCALL_DEFINEx'
236 | __SYSCALL_DEFINEx(x, sname, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
include/linux/syscalls.h:227:36: note: in expansion of macro 'SYSCALL_DEFINEx'
227 | #define SYSCALL_DEFINE3(name, ...) SYSCALL_DEFINEx(3, _##name, __VA_ARGS__)
| ^~~~~~~~~~~~~~~
fs/aio.c:2081:1: note: in expansion of macro 'SYSCALL_DEFINE3'
2081 | SYSCALL_DEFINE3(io_submit, aio_context_t, ctx_id, long, nr,
| ^~~~~~~~~~~~~~~
arch/powerpc/include/asm/syscall_wrapper.h:19:14: note: previous declaration of 'sys_io_submit' with type 'long int(const struct pt_regs *)'
19 | long sys##name(const struct pt_regs *regs); \
| ^~~
include/linux/syscalls.h:236:9: note: in expansion of macro '__SYSCALL_DEFINEx'
236 | __SYSCALL_DEFINEx(x, sname, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
include/linux/syscalls.h:227:36: note: in expansion of macro 'SYSCALL_DEFINEx'
227 | #define SYSCALL_DEFINE3(name, ...) SYSCALL_DEFINEx(3, _##name, __VA_ARGS__)
| ^~~~~~~~~~~~~~~
fs/aio.c:2081:1: note: in expansion of macro 'SYSCALL_DEFINE3'
2081 | SYSCALL_DEFINE3(io_submit, aio_context_t, ctx_id, long, nr,
| ^~~~~~~~~~~~~~~
fs/aio.c: In function 'sys_io_submit':
>> arch/powerpc/include/asm/syscall_wrapper.h:25:24: error: implicit declaration of function '__se_sys_io_submit'; did you mean 'sys_io_submit'? [-Wimplicit-function-declaration]
25 | return __se_sys##name(SC_POWERPC_REGS_TO_ARGS(x,__VA_ARGS__)); \
| ^~~~~~~~
include/linux/syscalls.h:236:9: note: in expansion of macro '__SYSCALL_DEFINEx'
236 | __SYSCALL_DEFINEx(x, sname, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
include/linux/syscalls.h:227:36: note: in expansion of macro 'SYSCALL_DEFINEx'
227 | #define SYSCALL_DEFINE3(name, ...) SYSCALL_DEFINEx(3, _##name, __VA_ARGS__)
| ^~~~~~~~~~~~~~~
fs/aio.c:2081:1: note: in expansion of macro 'SYSCALL_DEFINE3'
2081 | SYSCALL_DEFINE3(io_submit, aio_context_t, ctx_id, long, nr,
| ^~~~~~~~~~~~~~~
fs/aio.c: In function 'softleaf_entry_wait_on_locked':
arch/powerpc/include/asm/syscall_wrapper.h:27:21: error: invalid storage class for function '__se_sys_io_submit'
27 | static long __se_sys##name(__MAP(x,__SC_LONG,__VA_ARGS__)) \
| ^~~~~~~~
include/linux/syscalls.h:236:9: note: in expansion of macro '__SYSCALL_DEFINEx'
236 | __SYSCALL_DEFINEx(x, sname, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
include/linux/syscalls.h:227:36: note: in expansion of macro 'SYSCALL_DEFINEx'
227 | #define SYSCALL_DEFINE3(name, ...) SYSCALL_DEFINEx(3, _##name, __VA_ARGS__)
| ^~~~~~~~~~~~~~~
fs/aio.c:2081:1: note: in expansion of macro 'SYSCALL_DEFINE3'
2081 | SYSCALL_DEFINE3(io_submit, aio_context_t, ctx_id, long, nr,
| ^~~~~~~~~~~~~~~
fs/aio.c: In function '__se_sys_io_submit':
>> arch/powerpc/include/asm/syscall_wrapper.h:29:28: error: implicit declaration of function '__do_sys_io_submit'; did you mean '__se_sys_io_submit'? [-Wimplicit-function-declaration]
29 | long ret = __do_sys##name(__MAP(x,__SC_CAST,__VA_ARGS__)); \
| ^~~~~~~~
include/linux/syscalls.h:236:9: note: in expansion of macro '__SYSCALL_DEFINEx'
236 | __SYSCALL_DEFINEx(x, sname, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
include/linux/syscalls.h:227:36: note: in expansion of macro 'SYSCALL_DEFINEx'
227 | #define SYSCALL_DEFINE3(name, ...) SYSCALL_DEFINEx(3, _##name, __VA_ARGS__)
| ^~~~~~~~~~~~~~~
fs/aio.c:2081:1: note: in expansion of macro 'SYSCALL_DEFINE3'
2081 | SYSCALL_DEFINE3(io_submit, aio_context_t, ctx_id, long, nr,
| ^~~~~~~~~~~~~~~
fs/aio.c: In function 'softleaf_entry_wait_on_locked':
arch/powerpc/include/asm/syscall_wrapper.h:34:28: error: invalid storage class for function '__do_sys_io_submit'
34 | static inline long __do_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__))
| ^~~~~~~~
include/linux/syscalls.h:236:9: note: in expansion of macro '__SYSCALL_DEFINEx'
236 | __SYSCALL_DEFINEx(x, sname, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
include/linux/syscalls.h:227:36: note: in expansion of macro 'SYSCALL_DEFINEx'
227 | #define SYSCALL_DEFINE3(name, ...) SYSCALL_DEFINEx(3, _##name, __VA_ARGS__)
| ^~~~~~~~~~~~~~~
fs/aio.c:2081:1: note: in expansion of macro 'SYSCALL_DEFINE3'
2081 | SYSCALL_DEFINE3(io_submit, aio_context_t, ctx_id, long, nr,
| ^~~~~~~~~~~~~~~
>> arch/powerpc/include/asm/syscall_wrapper.h:21:21: error: invalid storage class for function '__se_sys_io_cancel'
21 | static long __se_sys##name(__MAP(x,__SC_LONG,__VA_ARGS__)); \
| ^~~~~~~~
include/linux/syscalls.h:236:9: note: in expansion of macro '__SYSCALL_DEFINEx'
236 | __SYSCALL_DEFINEx(x, sname, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
include/linux/syscalls.h:227:36: note: in expansion of macro 'SYSCALL_DEFINEx'
227 | #define SYSCALL_DEFINE3(name, ...) SYSCALL_DEFINEx(3, _##name, __VA_ARGS__)
| ^~~~~~~~~~~~~~~
fs/aio.c:2175:1: note: in expansion of macro 'SYSCALL_DEFINE3'
2175 | SYSCALL_DEFINE3(io_cancel, aio_context_t, ctx_id, struct iocb __user *, iocb,
| ^~~~~~~~~~~~~~~
>> arch/powerpc/include/asm/syscall_wrapper.h:22:28: error: invalid storage class for function '__do_sys_io_cancel'
22 | static inline long __do_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__)); \
| ^~~~~~~~
include/linux/syscalls.h:236:9: note: in expansion of macro '__SYSCALL_DEFINEx'
236 | __SYSCALL_DEFINEx(x, sname, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
include/linux/syscalls.h:227:36: note: in expansion of macro 'SYSCALL_DEFINEx'
227 | #define SYSCALL_DEFINE3(name, ...) SYSCALL_DEFINEx(3, _##name, __VA_ARGS__)
| ^~~~~~~~~~~~~~~
fs/aio.c:2175:1: note: in expansion of macro 'SYSCALL_DEFINE3'
2175 | SYSCALL_DEFINE3(io_cancel, aio_context_t, ctx_id, struct iocb __user *, iocb,
| ^~~~~~~~~~~~~~~
>> arch/powerpc/include/asm/syscall_wrapper.h:23:14: error: static declaration of 'sys_io_cancel' follows non-static declaration
23 | long sys##name(const struct pt_regs *regs) \
| ^~~
include/linux/syscalls.h:236:9: note: in expansion of macro '__SYSCALL_DEFINEx'
236 | __SYSCALL_DEFINEx(x, sname, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
include/linux/syscalls.h:227:36: note: in expansion of macro 'SYSCALL_DEFINEx'
227 | #define SYSCALL_DEFINE3(name, ...) SYSCALL_DEFINEx(3, _##name, __VA_ARGS__)
| ^~~~~~~~~~~~~~~
fs/aio.c:2175:1: note: in expansion of macro 'SYSCALL_DEFINE3'
2175 | SYSCALL_DEFINE3(io_cancel, aio_context_t, ctx_id, struct iocb __user *, iocb,
| ^~~~~~~~~~~~~~~
arch/powerpc/include/asm/syscall_wrapper.h:19:14: note: previous declaration of 'sys_io_cancel' with type 'long int(const struct pt_regs *)'
19 | long sys##name(const struct pt_regs *regs); \
| ^~~
include/linux/syscalls.h:236:9: note: in expansion of macro '__SYSCALL_DEFINEx'
236 | __SYSCALL_DEFINEx(x, sname, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
include/linux/syscalls.h:227:36: note: in expansion of macro 'SYSCALL_DEFINEx'
227 | #define SYSCALL_DEFINE3(name, ...) SYSCALL_DEFINEx(3, _##name, __VA_ARGS__)
| ^~~~~~~~~~~~~~~
fs/aio.c:2175:1: note: in expansion of macro 'SYSCALL_DEFINE3'
2175 | SYSCALL_DEFINE3(io_cancel, aio_context_t, ctx_id, struct iocb __user *, iocb,
| ^~~~~~~~~~~~~~~
fs/aio.c: In function 'sys_io_cancel':
>> arch/powerpc/include/asm/syscall_wrapper.h:25:24: error: implicit declaration of function '__se_sys_io_cancel'; did you mean 'sys_io_cancel'? [-Wimplicit-function-declaration]
25 | return __se_sys##name(SC_POWERPC_REGS_TO_ARGS(x,__VA_ARGS__)); \
| ^~~~~~~~
include/linux/syscalls.h:236:9: note: in expansion of macro '__SYSCALL_DEFINEx'
236 | __SYSCALL_DEFINEx(x, sname, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
include/linux/syscalls.h:227:36: note: in expansion of macro 'SYSCALL_DEFINEx'
227 | #define SYSCALL_DEFINE3(name, ...) SYSCALL_DEFINEx(3, _##name, __VA_ARGS__)
| ^~~~~~~~~~~~~~~
fs/aio.c:2175:1: note: in expansion of macro 'SYSCALL_DEFINE3'
2175 | SYSCALL_DEFINE3(io_cancel, aio_context_t, ctx_id, struct iocb __user *, iocb,
| ^~~~~~~~~~~~~~~
fs/aio.c: In function 'softleaf_entry_wait_on_locked':
arch/powerpc/include/asm/syscall_wrapper.h:27:21: error: invalid storage class for function '__se_sys_io_cancel'
27 | static long __se_sys##name(__MAP(x,__SC_LONG,__VA_ARGS__)) \
| ^~~~~~~~
include/linux/syscalls.h:236:9: note: in expansion of macro '__SYSCALL_DEFINEx'
236 | __SYSCALL_DEFINEx(x, sname, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
include/linux/syscalls.h:227:36: note: in expansion of macro 'SYSCALL_DEFINEx'
227 | #define SYSCALL_DEFINE3(name, ...) SYSCALL_DEFINEx(3, _##name, __VA_ARGS__)
| ^~~~~~~~~~~~~~~
fs/aio.c:2175:1: note: in expansion of macro 'SYSCALL_DEFINE3'
2175 | SYSCALL_DEFINE3(io_cancel, aio_context_t, ctx_id, struct iocb __user *, iocb,
| ^~~~~~~~~~~~~~~
fs/aio.c: In function '__se_sys_io_cancel':
>> arch/powerpc/include/asm/syscall_wrapper.h:29:28: error: implicit declaration of function '__do_sys_io_cancel'; did you mean '__se_sys_io_cancel'? [-Wimplicit-function-declaration]
29 | long ret = __do_sys##name(__MAP(x,__SC_CAST,__VA_ARGS__)); \
| ^~~~~~~~
include/linux/syscalls.h:236:9: note: in expansion of macro '__SYSCALL_DEFINEx'
236 | __SYSCALL_DEFINEx(x, sname, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
include/linux/syscalls.h:227:36: note: in expansion of macro 'SYSCALL_DEFINEx'
227 | #define SYSCALL_DEFINE3(name, ...) SYSCALL_DEFINEx(3, _##name, __VA_ARGS__)
| ^~~~~~~~~~~~~~~
fs/aio.c:2175:1: note: in expansion of macro 'SYSCALL_DEFINE3'
2175 | SYSCALL_DEFINE3(io_cancel, aio_context_t, ctx_id, struct iocb __user *, iocb,
| ^~~~~~~~~~~~~~~
fs/aio.c: In function 'softleaf_entry_wait_on_locked':
arch/powerpc/include/asm/syscall_wrapper.h:34:28: error: invalid storage class for function '__do_sys_io_cancel'
34 | static inline long __do_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__))
| ^~~~~~~~
include/linux/syscalls.h:236:9: note: in expansion of macro '__SYSCALL_DEFINEx'
236 | __SYSCALL_DEFINEx(x, sname, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
include/linux/syscalls.h:227:36: note: in expansion of macro 'SYSCALL_DEFINEx'
227 | #define SYSCALL_DEFINE3(name, ...) SYSCALL_DEFINEx(3, _##name, __VA_ARGS__)
| ^~~~~~~~~~~~~~~
fs/aio.c:2175:1: note: in expansion of macro 'SYSCALL_DEFINE3'
2175 | SYSCALL_DEFINE3(io_cancel, aio_context_t, ctx_id, struct iocb __user *, iocb,
| ^~~~~~~~~~~~~~~
fs/aio.c:2217:13: error: invalid storage class for function 'do_io_getevents'
2217 | static long do_io_getevents(aio_context_t ctx_id,
| ^~~~~~~~~~~~~~~
arch/powerpc/include/asm/syscall_wrapper.h:21:21: error: invalid storage class for function '__se_sys_io_pgetevents'
21 | static long __se_sys##name(__MAP(x,__SC_LONG,__VA_ARGS__)); \
| ^~~~~~~~
include/linux/syscalls.h:236:9: note: in expansion of macro '__SYSCALL_DEFINEx'
236 | __SYSCALL_DEFINEx(x, sname, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
include/linux/syscalls.h:230:36: note: in expansion of macro 'SYSCALL_DEFINEx'
230 | #define SYSCALL_DEFINE6(name, ...) SYSCALL_DEFINEx(6, _##name, __VA_ARGS__)
| ^~~~~~~~~~~~~~~
fs/aio.c:2275:1: note: in expansion of macro 'SYSCALL_DEFINE6'
2275 | SYSCALL_DEFINE6(io_pgetevents,
| ^~~~~~~~~~~~~~~
arch/powerpc/include/asm/syscall_wrapper.h:22:28: error: invalid storage class for function '__do_sys_io_pgetevents'
22 | static inline long __do_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__)); \
| ^~~~~~~~
include/linux/syscalls.h:236:9: note: in expansion of macro '__SYSCALL_DEFINEx'
236 | __SYSCALL_DEFINEx(x, sname, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
include/linux/syscalls.h:230:36: note: in expansion of macro 'SYSCALL_DEFINEx'
230 | #define SYSCALL_DEFINE6(name, ...) SYSCALL_DEFINEx(6, _##name, __VA_ARGS__)
| ^~~~~~~~~~~~~~~
fs/aio.c:2275:1: note: in expansion of macro 'SYSCALL_DEFINE6'
2275 | SYSCALL_DEFINE6(io_pgetevents,
| ^~~~~~~~~~~~~~~
arch/powerpc/include/asm/syscall_wrapper.h:23:14: error: static declaration of 'sys_io_pgetevents' follows non-static declaration
23 | long sys##name(const struct pt_regs *regs) \
| ^~~
include/linux/syscalls.h:236:9: note: in expansion of macro '__SYSCALL_DEFINEx'
236 | __SYSCALL_DEFINEx(x, sname, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
include/linux/syscalls.h:230:36: note: in expansion of macro 'SYSCALL_DEFINEx'
230 | #define SYSCALL_DEFINE6(name, ...) SYSCALL_DEFINEx(6, _##name, __VA_ARGS__)
| ^~~~~~~~~~~~~~~
fs/aio.c:2275:1: note: in expansion of macro 'SYSCALL_DEFINE6'
2275 | SYSCALL_DEFINE6(io_pgetevents,
| ^~~~~~~~~~~~~~~
arch/powerpc/include/asm/syscall_wrapper.h:19:14: note: previous declaration of 'sys_io_pgetevents' with type 'long int(const struct pt_regs *)'
19 | long sys##name(const struct pt_regs *regs); \
| ^~~
include/linux/syscalls.h:236:9: note: in expansion of macro '__SYSCALL_DEFINEx'
236 | __SYSCALL_DEFINEx(x, sname, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
include/linux/syscalls.h:230:36: note: in expansion of macro 'SYSCALL_DEFINEx'
230 | #define SYSCALL_DEFINE6(name, ...) SYSCALL_DEFINEx(6, _##name, __VA_ARGS__)
| ^~~~~~~~~~~~~~~
fs/aio.c:2275:1: note: in expansion of macro 'SYSCALL_DEFINE6'
2275 | SYSCALL_DEFINE6(io_pgetevents,
| ^~~~~~~~~~~~~~~
fs/aio.c: In function 'sys_io_pgetevents':
arch/powerpc/include/asm/syscall_wrapper.h:25:24: error: implicit declaration of function '__se_sys_io_pgetevents'; did you mean 'sys_io_pgetevents'? [-Wimplicit-function-declaration]
25 | return __se_sys##name(SC_POWERPC_REGS_TO_ARGS(x,__VA_ARGS__)); \
| ^~~~~~~~
include/linux/syscalls.h:236:9: note: in expansion of macro '__SYSCALL_DEFINEx'
236 | __SYSCALL_DEFINEx(x, sname, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
include/linux/syscalls.h:230:36: note: in expansion of macro 'SYSCALL_DEFINEx'
230 | #define SYSCALL_DEFINE6(name, ...) SYSCALL_DEFINEx(6, _##name, __VA_ARGS__)
| ^~~~~~~~~~~~~~~
fs/aio.c:2275:1: note: in expansion of macro 'SYSCALL_DEFINE6'
2275 | SYSCALL_DEFINE6(io_pgetevents,
| ^~~~~~~~~~~~~~~
fs/aio.c: In function 'softleaf_entry_wait_on_locked':
arch/powerpc/include/asm/syscall_wrapper.h:27:21: error: invalid storage class for function '__se_sys_io_pgetevents'
27 | static long __se_sys##name(__MAP(x,__SC_LONG,__VA_ARGS__)) \
| ^~~~~~~~
include/linux/syscalls.h:236:9: note: in expansion of macro '__SYSCALL_DEFINEx'
236 | __SYSCALL_DEFINEx(x, sname, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
include/linux/syscalls.h:230:36: note: in expansion of macro 'SYSCALL_DEFINEx'
230 | #define SYSCALL_DEFINE6(name, ...) SYSCALL_DEFINEx(6, _##name, __VA_ARGS__)
| ^~~~~~~~~~~~~~~
fs/aio.c:2275:1: note: in expansion of macro 'SYSCALL_DEFINE6'
2275 | SYSCALL_DEFINE6(io_pgetevents,
..
vim +49 arch/powerpc/include/asm/pgalloc.h
0186f47e703fb7 Kumar Gala 2008-11-19 6
de3b87611dd1f3 Balbir Singh 2017-05-02 7 #ifndef MODULE
de3b87611dd1f3 Balbir Singh 2017-05-02 @8 static inline gfp_t pgtable_gfp_flags(struct mm_struct *mm, gfp_t gfp)
de3b87611dd1f3 Balbir Singh 2017-05-02 9 {
de3b87611dd1f3 Balbir Singh 2017-05-02 @10 if (unlikely(mm == &init_mm))
de3b87611dd1f3 Balbir Singh 2017-05-02 11 return gfp;
de3b87611dd1f3 Balbir Singh 2017-05-02 12 return gfp | __GFP_ACCOUNT;
de3b87611dd1f3 Balbir Singh 2017-05-02 13 }
de3b87611dd1f3 Balbir Singh 2017-05-02 14 #else /* !MODULE */
de3b87611dd1f3 Balbir Singh 2017-05-02 15 static inline gfp_t pgtable_gfp_flags(struct mm_struct *mm, gfp_t gfp)
de3b87611dd1f3 Balbir Singh 2017-05-02 16 {
de3b87611dd1f3 Balbir Singh 2017-05-02 17 return gfp | __GFP_ACCOUNT;
de3b87611dd1f3 Balbir Singh 2017-05-02 18 }
de3b87611dd1f3 Balbir Singh 2017-05-02 19 #endif /* MODULE */
de3b87611dd1f3 Balbir Singh 2017-05-02 20
75f296d93bcebc Levin, Alexander (Sasha Levin 2017-11-15 21) #define PGALLOC_GFP (GFP_KERNEL | __GFP_ZERO)
5b6c133e080100 Michael Ellerman 2017-08-15 22
dc096864ba784c Christophe Leroy 2019-04-26 23 pte_t *pte_fragment_alloc(struct mm_struct *mm, int kernel);
dc096864ba784c Christophe Leroy 2019-04-26 24
dc096864ba784c Christophe Leroy 2019-04-26 @25 static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm)
dc096864ba784c Christophe Leroy 2019-04-26 26 {
dc096864ba784c Christophe Leroy 2019-04-26 27 return (pte_t *)pte_fragment_alloc(mm, 1);
dc096864ba784c Christophe Leroy 2019-04-26 28 }
dc096864ba784c Christophe Leroy 2019-04-26 29
dc096864ba784c Christophe Leroy 2019-04-26 @30 static inline pgtable_t pte_alloc_one(struct mm_struct *mm)
dc096864ba784c Christophe Leroy 2019-04-26 31 {
dc096864ba784c Christophe Leroy 2019-04-26 32 return (pgtable_t)pte_fragment_alloc(mm, 0);
dc096864ba784c Christophe Leroy 2019-04-26 33 }
dc096864ba784c Christophe Leroy 2019-04-26 34
dc096864ba784c Christophe Leroy 2019-04-26 35 void pte_frag_destroy(void *pte_frag);
dc096864ba784c Christophe Leroy 2019-04-26 36 void pte_fragment_free(unsigned long *table, int kernel);
dc096864ba784c Christophe Leroy 2019-04-26 37
dc096864ba784c Christophe Leroy 2019-04-26 @38 static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
dc096864ba784c Christophe Leroy 2019-04-26 39 {
dc096864ba784c Christophe Leroy 2019-04-26 40 pte_fragment_free((unsigned long *)pte, 1);
dc096864ba784c Christophe Leroy 2019-04-26 41 }
dc096864ba784c Christophe Leroy 2019-04-26 42
dc096864ba784c Christophe Leroy 2019-04-26 @43 static inline void pte_free(struct mm_struct *mm, pgtable_t ptepage)
dc096864ba784c Christophe Leroy 2019-04-26 44 {
dc096864ba784c Christophe Leroy 2019-04-26 45 pte_fragment_free((unsigned long *)ptepage, 0);
dc096864ba784c Christophe Leroy 2019-04-26 46 }
dc096864ba784c Christophe Leroy 2019-04-26 47
32cc0b7c9d508e Hugh Dickins 2023-07-11 48 /* arch use pte_free_defer() implementation in arch/powerpc/mm/pgtable-frag.c */
32cc0b7c9d508e Hugh Dickins 2023-07-11 @49 #define pte_free_defer pte_free_defer
32cc0b7c9d508e Hugh Dickins 2023-07-11 50 void pte_free_defer(struct mm_struct *mm, pgtable_t pgtable);
32cc0b7c9d508e Hugh Dickins 2023-07-11 51
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* [PATCH v7 6/6] mm:/migrate_device.c: remove migrate_vma_collect_*()
2026-03-30 11:56 [RESEND PATCH v7 0/6] Migrate on fault for device pages mpenttil
@ 2026-03-30 11:56 ` mpenttil
0 siblings, 0 replies; 15+ messages in thread
From: mpenttil @ 2026-03-30 11:56 UTC (permalink / raw)
To: linux-mm
Cc: linux-kernel, Mika Penttilä, David Hildenbrand,
Jason Gunthorpe, Leon Romanovsky, Alistair Popple, Balbir Singh,
Zi Yan, Matthew Brost
From: Mika Penttilä <mpenttil@redhat.com>
With the unified fault handling and migration path,
the migrate_vma_collect_*() functions are unused;
remove them.
Cc: David Hildenbrand <david@kernel.org>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Leon Romanovsky <leonro@nvidia.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Balbir Singh <balbirs@nvidia.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Mika Penttilä <mpenttil@redhat.com>
---
mm/migrate_device.c | 508 --------------------------------------------
1 file changed, 508 deletions(-)
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index 7ca5dc80d39b..9098b64aeb2c 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -18,514 +18,6 @@
#include <asm/tlbflush.h>
#include "internal.h"
-static int migrate_vma_collect_skip(unsigned long start,
- unsigned long end,
- struct mm_walk *walk)
-{
- struct migrate_vma *migrate = walk->private;
- unsigned long addr;
-
- for (addr = start; addr < end; addr += PAGE_SIZE) {
- migrate->dst[migrate->npages] = 0;
- migrate->src[migrate->npages++] = 0;
- }
-
- return 0;
-}
-
-static int migrate_vma_collect_hole(unsigned long start,
- unsigned long end,
- __always_unused int depth,
- struct mm_walk *walk)
-{
- struct migrate_vma *migrate = walk->private;
- unsigned long addr;
-
- /* Only allow populating anonymous memory. */
- if (!vma_is_anonymous(walk->vma))
- return migrate_vma_collect_skip(start, end, walk);
-
- if (thp_migration_supported() &&
- (migrate->flags & MIGRATE_VMA_SELECT_COMPOUND) &&
- (IS_ALIGNED(start, HPAGE_PMD_SIZE) &&
- IS_ALIGNED(end, HPAGE_PMD_SIZE))) {
- migrate->src[migrate->npages] = MIGRATE_PFN_MIGRATE |
- MIGRATE_PFN_COMPOUND;
- migrate->dst[migrate->npages] = 0;
- migrate->npages++;
- migrate->cpages++;
-
- /*
- * Collect the remaining entries as holes, in case we
- * need to split later
- */
- return migrate_vma_collect_skip(start + PAGE_SIZE, end, walk);
- }
-
- for (addr = start; addr < end; addr += PAGE_SIZE) {
- migrate->src[migrate->npages] = MIGRATE_PFN_MIGRATE;
- migrate->dst[migrate->npages] = 0;
- migrate->npages++;
- migrate->cpages++;
- }
-
- return 0;
-}
-
-/**
- * migrate_vma_split_folio() - Helper function to split a THP folio
- * @folio: the folio to split
- * @fault_page: struct page associated with the fault if any
- *
- * Returns 0 on success
- */
-static int migrate_vma_split_folio(struct folio *folio,
- struct page *fault_page)
-{
- int ret;
- struct folio *fault_folio = fault_page ? page_folio(fault_page) : NULL;
- struct folio *new_fault_folio = NULL;
-
- if (folio != fault_folio) {
- folio_get(folio);
- folio_lock(folio);
- }
-
- ret = split_folio(folio);
- if (ret) {
- if (folio != fault_folio) {
- folio_unlock(folio);
- folio_put(folio);
- }
- return ret;
- }
-
- new_fault_folio = fault_page ? page_folio(fault_page) : NULL;
-
- /*
- * Ensure the lock is held on the correct
- * folio after the split
- */
- if (!new_fault_folio) {
- folio_unlock(folio);
- folio_put(folio);
- } else if (folio != new_fault_folio) {
- if (new_fault_folio != fault_folio) {
- folio_get(new_fault_folio);
- folio_lock(new_fault_folio);
- }
- folio_unlock(folio);
- folio_put(folio);
- }
-
- return 0;
-}
-
-/** migrate_vma_collect_huge_pmd - collect THP pages without splitting the
- * folio for device private pages.
- * @pmdp: pointer to pmd entry
- * @start: start address of the range for migration
- * @end: end address of the range for migration
- * @walk: mm_walk callback structure
- * @fault_folio: folio associated with the fault if any
- *
- * Collect the huge pmd entry at @pmdp for migration and set the
- * MIGRATE_PFN_COMPOUND flag in the migrate src entry to indicate that
- * migration will occur at HPAGE_PMD granularity
- */
-static int migrate_vma_collect_huge_pmd(pmd_t *pmdp, unsigned long start,
- unsigned long end, struct mm_walk *walk,
- struct folio *fault_folio)
-{
- struct mm_struct *mm = walk->mm;
- struct folio *folio;
- struct migrate_vma *migrate = walk->private;
- spinlock_t *ptl;
- int ret;
- unsigned long write = 0;
-
- ptl = pmd_lock(mm, pmdp);
- if (pmd_none(*pmdp)) {
- spin_unlock(ptl);
- return migrate_vma_collect_hole(start, end, -1, walk);
- }
-
- if (pmd_trans_huge(*pmdp)) {
- if (!(migrate->flags & MIGRATE_VMA_SELECT_SYSTEM)) {
- spin_unlock(ptl);
- return migrate_vma_collect_skip(start, end, walk);
- }
-
- folio = pmd_folio(*pmdp);
- if (is_huge_zero_folio(folio)) {
- spin_unlock(ptl);
- return migrate_vma_collect_hole(start, end, -1, walk);
- }
- if (pmd_write(*pmdp))
- write = MIGRATE_PFN_WRITE;
- } else if (!pmd_present(*pmdp)) {
- const softleaf_t entry = softleaf_from_pmd(*pmdp);
-
- folio = softleaf_to_folio(entry);
-
- if (!softleaf_is_device_private(entry) ||
- !(migrate->flags & MIGRATE_VMA_SELECT_DEVICE_PRIVATE) ||
- (folio->pgmap->owner != migrate->pgmap_owner)) {
- spin_unlock(ptl);
- return migrate_vma_collect_skip(start, end, walk);
- }
-
- if (softleaf_is_migration(entry)) {
- softleaf_entry_wait_on_locked(entry, ptl);
- spin_unlock(ptl);
- return -EAGAIN;
- }
-
- if (softleaf_is_device_private_write(entry))
- write = MIGRATE_PFN_WRITE;
- } else {
- spin_unlock(ptl);
- return -EAGAIN;
- }
-
- folio_get(folio);
- if (folio != fault_folio && unlikely(!folio_trylock(folio))) {
- spin_unlock(ptl);
- folio_put(folio);
- return migrate_vma_collect_skip(start, end, walk);
- }
-
- if (thp_migration_supported() &&
- (migrate->flags & MIGRATE_VMA_SELECT_COMPOUND) &&
- (IS_ALIGNED(start, HPAGE_PMD_SIZE) &&
- IS_ALIGNED(end, HPAGE_PMD_SIZE))) {
-
- struct page_vma_mapped_walk pvmw = {
- .ptl = ptl,
- .address = start,
- .pmd = pmdp,
- .vma = walk->vma,
- };
-
- unsigned long pfn = page_to_pfn(folio_page(folio, 0));
-
- migrate->src[migrate->npages] = migrate_pfn(pfn) | write
- | MIGRATE_PFN_MIGRATE
- | MIGRATE_PFN_COMPOUND;
- migrate->dst[migrate->npages++] = 0;
- migrate->cpages++;
- ret = set_pmd_migration_entry(&pvmw, folio_page(folio, 0));
- if (ret) {
- migrate->npages--;
- migrate->cpages--;
- migrate->src[migrate->npages] = 0;
- migrate->dst[migrate->npages] = 0;
- goto fallback;
- }
- migrate_vma_collect_skip(start + PAGE_SIZE, end, walk);
- spin_unlock(ptl);
- return 0;
- }
-
-fallback:
- spin_unlock(ptl);
- if (!folio_test_large(folio))
- goto done;
- ret = split_folio(folio);
- if (fault_folio != folio)
- folio_unlock(folio);
- folio_put(folio);
- if (ret)
- return migrate_vma_collect_skip(start, end, walk);
- if (pmd_none(pmdp_get_lockless(pmdp)))
- return migrate_vma_collect_hole(start, end, -1, walk);
-
-done:
- return -ENOENT;
-}
-
-static int migrate_vma_collect_pmd(pmd_t *pmdp,
- unsigned long start,
- unsigned long end,
- struct mm_walk *walk)
-{
- struct migrate_vma *migrate = walk->private;
- struct vm_area_struct *vma = walk->vma;
- struct mm_struct *mm = vma->vm_mm;
- unsigned long addr = start, unmapped = 0;
- spinlock_t *ptl;
- struct folio *fault_folio = migrate->fault_page ?
- page_folio(migrate->fault_page) : NULL;
- pte_t *ptep;
-
-again:
- if (pmd_trans_huge(*pmdp) || !pmd_present(*pmdp)) {
- int ret = migrate_vma_collect_huge_pmd(pmdp, start, end, walk, fault_folio);
-
- if (ret == -EAGAIN)
- goto again;
- if (ret == 0)
- return 0;
- }
-
- ptep = pte_offset_map_lock(mm, pmdp, start, &ptl);
- if (!ptep)
- goto again;
- lazy_mmu_mode_enable();
- ptep += (addr - start) / PAGE_SIZE;
-
- for (; addr < end; addr += PAGE_SIZE, ptep++) {
- struct dev_pagemap *pgmap;
- unsigned long mpfn = 0, pfn;
- struct folio *folio;
- struct page *page;
- softleaf_t entry;
- pte_t pte;
-
- pte = ptep_get(ptep);
-
- if (pte_none(pte)) {
- if (vma_is_anonymous(vma)) {
- mpfn = MIGRATE_PFN_MIGRATE;
- migrate->cpages++;
- }
- goto next;
- }
-
- if (!pte_present(pte)) {
- /*
- * Only care about unaddressable device page special
- * page table entry. Other special swap entries are not
- * migratable, and we ignore regular swapped page.
- */
- entry = softleaf_from_pte(pte);
- if (!softleaf_is_device_private(entry))
- goto next;
-
- page = softleaf_to_page(entry);
- pgmap = page_pgmap(page);
- if (!(migrate->flags &
- MIGRATE_VMA_SELECT_DEVICE_PRIVATE) ||
- pgmap->owner != migrate->pgmap_owner)
- goto next;
-
- folio = page_folio(page);
- if (folio_test_large(folio)) {
- int ret;
-
- lazy_mmu_mode_disable();
- pte_unmap_unlock(ptep, ptl);
- ret = migrate_vma_split_folio(folio,
- migrate->fault_page);
-
- if (ret) {
- if (unmapped)
- flush_tlb_range(walk->vma, start, end);
-
- return migrate_vma_collect_skip(addr, end, walk);
- }
-
- goto again;
- }
-
- mpfn = migrate_pfn(page_to_pfn(page)) |
- MIGRATE_PFN_MIGRATE;
- if (softleaf_is_device_private_write(entry))
- mpfn |= MIGRATE_PFN_WRITE;
- } else {
- pfn = pte_pfn(pte);
- if (is_zero_pfn(pfn) &&
- (migrate->flags & MIGRATE_VMA_SELECT_SYSTEM)) {
- mpfn = MIGRATE_PFN_MIGRATE;
- migrate->cpages++;
- goto next;
- }
- page = vm_normal_page(migrate->vma, addr, pte);
- if (page && !is_zone_device_page(page) &&
- !(migrate->flags & MIGRATE_VMA_SELECT_SYSTEM)) {
- goto next;
- } else if (page && is_device_coherent_page(page)) {
- pgmap = page_pgmap(page);
-
- if (!(migrate->flags &
- MIGRATE_VMA_SELECT_DEVICE_COHERENT) ||
- pgmap->owner != migrate->pgmap_owner)
- goto next;
- }
- folio = page ? page_folio(page) : NULL;
- if (folio && folio_test_large(folio)) {
- int ret;
-
- lazy_mmu_mode_disable();
- pte_unmap_unlock(ptep, ptl);
- ret = migrate_vma_split_folio(folio,
- migrate->fault_page);
-
- if (ret) {
- if (unmapped)
- flush_tlb_range(walk->vma, start, end);
-
- return migrate_vma_collect_skip(addr, end, walk);
- }
-
- goto again;
- }
- mpfn = migrate_pfn(pfn) | MIGRATE_PFN_MIGRATE;
- mpfn |= pte_write(pte) ? MIGRATE_PFN_WRITE : 0;
- }
-
- if (!page || !page->mapping) {
- mpfn = 0;
- goto next;
- }
-
- /*
- * By getting a reference on the folio we pin it and that blocks
- * any kind of migration. Side effect is that it "freezes" the
- * pte.
- *
- * We drop this reference after isolating the folio from the lru
- * for non device folio (device folio are not on the lru and thus
- * can't be dropped from it).
- */
- folio = page_folio(page);
- folio_get(folio);
-
- /*
- * We rely on folio_trylock() to avoid deadlock between
- * concurrent migrations where each is waiting on the others
- * folio lock. If we can't immediately lock the folio we fail this
- * migration as it is only best effort anyway.
- *
- * If we can lock the folio it's safe to set up a migration entry
- * now. In the common case where the folio is mapped once in a
- * single process setting up the migration entry now is an
- * optimisation to avoid walking the rmap later with
- * try_to_migrate().
- */
- if (fault_folio == folio || folio_trylock(folio)) {
- bool anon_exclusive;
- pte_t swp_pte;
-
- flush_cache_page(vma, addr, pte_pfn(pte));
- anon_exclusive = folio_test_anon(folio) &&
- PageAnonExclusive(page);
- if (anon_exclusive) {
- pte = ptep_clear_flush(vma, addr, ptep);
-
- if (folio_try_share_anon_rmap_pte(folio, page)) {
- set_pte_at(mm, addr, ptep, pte);
- if (fault_folio != folio)
- folio_unlock(folio);
- folio_put(folio);
- mpfn = 0;
- goto next;
- }
- } else {
- pte = ptep_get_and_clear(mm, addr, ptep);
- }
-
- migrate->cpages++;
-
- /* Set the dirty flag on the folio now the pte is gone. */
- if (pte_dirty(pte))
- folio_mark_dirty(folio);
-
- /* Setup special migration page table entry */
- if (mpfn & MIGRATE_PFN_WRITE)
- entry = make_writable_migration_entry(
- page_to_pfn(page));
- else if (anon_exclusive)
- entry = make_readable_exclusive_migration_entry(
- page_to_pfn(page));
- else
- entry = make_readable_migration_entry(
- page_to_pfn(page));
- if (pte_present(pte)) {
- if (pte_young(pte))
- entry = make_migration_entry_young(entry);
- if (pte_dirty(pte))
- entry = make_migration_entry_dirty(entry);
- }
- swp_pte = swp_entry_to_pte(entry);
- if (pte_present(pte)) {
- if (pte_soft_dirty(pte))
- swp_pte = pte_swp_mksoft_dirty(swp_pte);
- if (pte_uffd_wp(pte))
- swp_pte = pte_swp_mkuffd_wp(swp_pte);
- } else {
- if (pte_swp_soft_dirty(pte))
- swp_pte = pte_swp_mksoft_dirty(swp_pte);
- if (pte_swp_uffd_wp(pte))
- swp_pte = pte_swp_mkuffd_wp(swp_pte);
- }
- set_pte_at(mm, addr, ptep, swp_pte);
-
- /*
- * This is like regular unmap: we remove the rmap and
- * drop the folio refcount. The folio won't be freed, as
- * we took a reference just above.
- */
- folio_remove_rmap_pte(folio, page, vma);
- folio_put(folio);
-
- if (pte_present(pte))
- unmapped++;
- } else {
- folio_put(folio);
- mpfn = 0;
- }
-
-next:
- migrate->dst[migrate->npages] = 0;
- migrate->src[migrate->npages++] = mpfn;
- }
-
- /* Only flush the TLB if we actually modified any entries */
- if (unmapped)
- flush_tlb_range(walk->vma, start, end);
-
- lazy_mmu_mode_disable();
- pte_unmap_unlock(ptep - 1, ptl);
-
- return 0;
-}
-
-static const struct mm_walk_ops migrate_vma_walk_ops = {
- .pmd_entry = migrate_vma_collect_pmd,
- .pte_hole = migrate_vma_collect_hole,
- .walk_lock = PGWALK_RDLOCK,
-};
-
-/*
- * migrate_vma_collect() - collect pages over a range of virtual addresses
- * @migrate: migrate struct containing all migration information
- *
- * This will walk the CPU page table. For each virtual address backed by a
- * valid page, it updates the src array and takes a reference on the page, in
- * order to pin the page until we lock it and unmap it.
- */
-static void migrate_vma_collect(struct migrate_vma *migrate)
-{
- struct mmu_notifier_range range;
-
- /*
- * Note that the pgmap_owner is passed to the mmu notifier callback so
- * that the registered device driver can skip invalidating device
- * private page mappings that won't be migrated.
- */
- mmu_notifier_range_init_owner(&range, MMU_NOTIFY_MIGRATE, 0,
- migrate->vma->vm_mm, migrate->start, migrate->end,
- migrate->pgmap_owner);
- mmu_notifier_invalidate_range_start(&range);
-
- walk_page_range(migrate->vma->vm_mm, migrate->start, migrate->end,
- &migrate_vma_walk_ops, migrate);
-
- mmu_notifier_invalidate_range_end(&range);
- migrate->end = migrate->start + (migrate->npages << PAGE_SHIFT);
-}
-
/*
* migrate_vma_check_page() - check if page is pinned or not
* @page: struct page to check
--
2.50.0
end of thread, other threads:[~2026-03-30 11:57 UTC | newest]
Thread overview: 15+ messages
-- links below jump to the message on this page --
2026-03-30 4:30 [PATCH v7 0/6] Migrate on fault for device pages mpenttil
2026-03-30 4:30 ` [PATCH v7 1/6] mm:/Kconfig changes for migrate " mpenttil
2026-03-30 6:20 ` Christoph Hellwig
2026-03-30 4:30 ` [PATCH v7 2/6] mm: Add helper to convert HMM pfn to migrate pfn mpenttil
2026-03-30 4:30 ` [PATCH v7 3/6] mm/hmm: do the plumbing for HMM to participate in migration mpenttil
2026-03-30 11:05 ` kernel test robot
2026-03-30 11:05 ` kernel test robot
2026-03-30 4:30 ` [PATCH v7 4/6] mm: setup device page migration in HMM pagewalk mpenttil
2026-03-30 4:30 ` [PATCH v7 5/6] mm: add new testcase for the migrate on fault case mpenttil
2026-03-30 6:21 ` Christoph Hellwig
2026-03-30 6:40 ` Mika Penttilä
2026-03-30 4:30 ` [PATCH v7 6/6] mm:/migrate_device.c: remove migrate_vma_collect_*() mpenttil
2026-03-30 6:22 ` Christoph Hellwig
2026-03-30 6:47 ` Mika Penttilä
-- strict thread matches above, loose matches on Subject: below --
2026-03-30 11:56 [RESEND PATCH v7 0/6] Migrate on fault for device pages mpenttil
2026-03-30 11:56 ` [PATCH v7 6/6] mm:/migrate_device.c: remove migrate_vma_collect_*() mpenttil