From: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>
To: Michael Kelley <mhklinux@outlook.com>
Cc: "kys@microsoft.com" <kys@microsoft.com>,
"haiyangz@microsoft.com" <haiyangz@microsoft.com>,
"wei.liu@kernel.org" <wei.liu@kernel.org>,
"decui@microsoft.com" <decui@microsoft.com>,
"longli@microsoft.com" <longli@microsoft.com>,
"linux-hyperv@vger.kernel.org" <linux-hyperv@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 2/7] mshv: Add support to address range holes remapping
Date: Mon, 20 Apr 2026 09:24:26 -0700
Message-ID: <aeZTOj0htus0eoF1@skinsburskii.localdomain>
In-Reply-To: <SN6PR02MB4157D44B15BAA0F3CA8B078BD4242@SN6PR02MB4157.namprd02.prod.outlook.com>
On Mon, Apr 13, 2026 at 09:08:31PM +0000, Michael Kelley wrote:
> From: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com> Sent: Monday, March 30, 2026 1:04 PM
> >
> > Consolidate memory region processing to handle both valid and invalid PFNs
> > uniformly. This eliminates code duplication across remap, unmap, share, and
> > unshare operations by using a common range processing interface.
> >
> > Holes are now remapped with no-access permissions to enable
> > hypervisor dirty page tracking for precopy live migration.
> >
> > This refactoring is a precursor to an upcoming change that will map
> > present pages in movable regions upon region creation, requiring
> > consistent handling of both mapped and unmapped ranges.
> >
> > Signed-off-by: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>
> > ---
> > drivers/hv/mshv_regions.c | 108 ++++++++++++++++++++++++++++++++++++++++-----
> > 1 file changed, 95 insertions(+), 13 deletions(-)
> >
> > diff --git a/drivers/hv/mshv_regions.c b/drivers/hv/mshv_regions.c
> > index b1a707d16c07..ed9c55841140 100644
> > --- a/drivers/hv/mshv_regions.c
> > +++ b/drivers/hv/mshv_regions.c
> > @@ -119,6 +119,57 @@ static long mshv_region_process_pfns(struct mshv_mem_region *region,
> > return count;
> > }
> >
> > +/**
> > + * mshv_region_process_hole - Handle a hole (invalid PFNs) in a memory region
> > + * @region : Memory region containing the hole
> > + * @flags : Flags to pass to the handler function
> > + * @pfn_offset: Starting PFN offset within the region
> > + * @pfn_count : Number of PFNs in the hole
> > + * @handler : Callback function to invoke for the hole
> > + *
> > + * Invokes the handler function for a contiguous hole with the specified
> > + * parameters.
> > + *
> > + * Return: Number of PFNs handled, or negative error code.
> > + */
> > +static long mshv_region_process_hole(struct mshv_mem_region *region,
> > + u32 flags,
> > + u64 pfn_offset, u64 pfn_count,
> > + int (*handler)(struct mshv_mem_region *region,
> > + u32 flags,
> > + u64 pfn_offset,
> > + u64 pfn_count,
> > + bool huge_page))
> > +{
> > + long ret;
> > +
> > + ret = handler(region, flags, pfn_offset, pfn_count, 0);
> > + if (ret)
> > + return ret;
> > +
> > + return pfn_count;
> > +}
> > +
> > +static long mshv_region_process_chunk(struct mshv_mem_region *region,
> > + u32 flags,
> > + u64 pfn_offset, u64 pfn_count,
> > + int (*handler)(struct mshv_mem_region *region,
> > + u32 flags,
> > + u64 pfn_offset,
> > + u64 pfn_count,
> > + bool huge_page))
> > +{
> > + if (pfn_valid(region->mreg_pfns[pfn_offset]))
> > + return mshv_region_process_pfns(region, flags,
> > + pfn_offset, pfn_count,
> > + handler);
> > + else
> > + return mshv_region_process_hole(region, flags,
> > + pfn_offset, pfn_count,
> > + handler);
> > +}
> > +
> > /**
> > * mshv_region_process_range - Processes a range of PFNs in a region.
> > * @region : Pointer to the memory region structure.
> > @@ -146,33 +197,47 @@ static int mshv_region_process_range(struct mshv_mem_region *region,
> > u64 pfn_count,
> > bool huge_page))
> > {
> > - u64 pfn_end;
> > + u64 start, end;
> > long ret;
> >
> > - if (check_add_overflow(pfn_offset, pfn_count, &pfn_end))
> > + if (!pfn_count)
> > + return 0;
> > +
> > + if (check_add_overflow(pfn_offset, pfn_count, &end))
> > return -EOVERFLOW;
> >
> > - if (pfn_end > region->nr_pfns)
> > + if (end > region->nr_pfns)
> > return -EINVAL;
> >
> > - while (pfn_count) {
> > - /* Skip non-present pages */
> > - if (!pfn_valid(region->mreg_pfns[pfn_offset])) {
> > - pfn_offset++;
> > - pfn_count--;
> > + start = pfn_offset;
> > + end = pfn_offset + 1;
> > +
> > + while (end < pfn_offset + pfn_count) {
> > + /*
> > + * Accumulate contiguous pfns with the same validity
> > + * (valid or not).
> > + */
> > + if (pfn_valid(region->mreg_pfns[start]) ==
> > + pfn_valid(region->mreg_pfns[end])) {
> > + end++;
> > continue;
> > }
> >
> > - ret = mshv_region_process_pfns(region, flags,
> > - pfn_offset, pfn_count,
> > - handler);
> > + ret = mshv_region_process_chunk(region, flags,
> > + start, end - start,
> > + handler);
> > if (ret < 0)
> > return ret;
> >
> > - pfn_offset += ret;
> > - pfn_count -= ret;
> > + start += ret;
> > }
> >
> > + ret = mshv_region_process_chunk(region, flags,
> > + start, end - start,
> > + handler);
> > + if (ret < 0)
> > + return ret;
> > +
> > return 0;
> > }
> >
> > @@ -208,6 +273,9 @@ static int mshv_region_chunk_share(struct mshv_mem_region *region,
> > u64 pfn_offset, u64 pfn_count,
> > bool huge_page)
> > {
> > + if (!pfn_valid(region->mreg_pfns[pfn_offset]))
> > + return -EINVAL;
> > +
> > if (huge_page)
> > flags |= HV_MODIFY_SPA_PAGE_HOST_ACCESS_LARGE_PAGE;
> >
> > @@ -233,6 +301,9 @@ static int mshv_region_chunk_unshare(struct mshv_mem_region *region,
> > u64 pfn_offset, u64 pfn_count,
> > bool huge_page)
> > {
> > + if (!pfn_valid(region->mreg_pfns[pfn_offset]))
> > + return -EINVAL;
> > +
> > if (huge_page)
> > flags |= HV_MODIFY_SPA_PAGE_HOST_ACCESS_LARGE_PAGE;
> >
> > @@ -256,6 +327,14 @@ static int mshv_region_chunk_remap(struct mshv_mem_region *region,
> > u64 pfn_offset, u64 pfn_count,
> > bool huge_page)
> > {
> > + /*
> > + * Remap missing pages with no access to let the
> > + * hypervisor track dirty pages, enabling precopy live
> > + * migration.
> > + */
> > + if (!pfn_valid(region->mreg_pfns[pfn_offset]))
> > + flags = HV_MAP_GPA_NO_ACCESS;
>
> Is it OK to wipe out any other flags that might be set? Certainly, any previous
> flags in PERMISSIONS_MASK should be removed, but what about ADJUSTABLE
> and NOT_CACHED?
>
Yes, this is the right approach. HV_MAP_GPA_NO_ACCESS makes the
hypervisor fault immediately on any access to the page, so caching and
adjustability no longer matter for these mappings.
Thanks,
Stanislav
> > +
> > if (huge_page)
> > flags |= HV_MAP_GPA_LARGE_PAGE;
> >
> > @@ -357,6 +436,9 @@ static int mshv_region_chunk_unmap(struct mshv_mem_region *region,
> > u64 pfn_offset, u64 pfn_count,
> > bool huge_page)
> > {
> > + if (!pfn_valid(region->mreg_pfns[pfn_offset]))
> > + return 0;
> > +
> > if (huge_page)
> > flags |= HV_UNMAP_GPA_LARGE_PAGE;
> >