* [PATCH] mshv: Replace fixed memory deposit with status driven helper
@ 2026-02-19 22:09 Stanislav Kinsburskii
  2026-02-20 17:05 ` Michael Kelley
  2026-02-25  5:20 ` Anirudh Rayabharam
  0 siblings, 2 replies; 5+ messages in thread
From: Stanislav Kinsburskii @ 2026-02-19 22:09 UTC (permalink / raw)
To: kys, haiyangz, wei.liu, decui, longli; +Cc: linux-hyperv, linux-kernel

Replace the hardcoded HV_MAP_GPA_DEPOSIT_PAGES usage with
hv_deposit_memory(), which derives the deposit size from the hypercall
status, and remove the now-unused constant.

The previous code always deposited a fixed 256 pages on insufficient
memory, ignoring the actual demand reported by the hypervisor.
hv_deposit_memory() handles the different deposit statuses, aligning
map-GPA retries with the rest of the codebase.

This approach may require more allocation and deposit hypercall
iterations, but it avoids over-depositing large fixed chunks when fewer
pages would suffice. Until any performance impact is measured, the more
frugal and consistent behavior is preferred.
Signed-off-by: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>
---
 drivers/hv/mshv_root_hv_call.c |    4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/hv/mshv_root_hv_call.c b/drivers/hv/mshv_root_hv_call.c
index 7f91096f95a8..317191462b63 100644
--- a/drivers/hv/mshv_root_hv_call.c
+++ b/drivers/hv/mshv_root_hv_call.c
@@ -16,7 +16,6 @@

 /* Determined empirically */
 #define HV_INIT_PARTITION_DEPOSIT_PAGES 208
-#define HV_MAP_GPA_DEPOSIT_PAGES 256
 #define HV_UMAP_GPA_PAGES 512

 #define HV_PAGE_COUNT_2M_ALIGNED(pg_count) (!((pg_count) & (0x200 - 1)))
@@ -239,8 +238,7 @@ static int hv_do_map_gpa_hcall(u64 partition_id, u64 gfn, u64 page_struct_count,
 	completed = hv_repcomp(status);

 	if (hv_result_needs_memory(status)) {
-		ret = hv_call_deposit_pages(NUMA_NO_NODE, partition_id,
-					    HV_MAP_GPA_DEPOSIT_PAGES);
+		ret = hv_deposit_memory(partition_id, status);
 		if (ret)
 			break;
* RE: [PATCH] mshv: Replace fixed memory deposit with status driven helper
  2026-02-19 22:09 [PATCH] mshv: Replace fixed memory deposit with status driven helper Stanislav Kinsburskii
@ 2026-02-20 17:05 ` Michael Kelley
  2026-02-20 19:05   ` Mukesh R
  2026-02-23 18:17   ` Stanislav Kinsburskii
  2026-02-25  5:20 ` Anirudh Rayabharam
  1 sibling, 2 replies; 5+ messages in thread
From: Michael Kelley @ 2026-02-20 17:05 UTC (permalink / raw)
To: Stanislav Kinsburskii, kys@microsoft.com, haiyangz@microsoft.com,
    wei.liu@kernel.org, decui@microsoft.com, longli@microsoft.com
Cc: linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org

From: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com> Sent: Thursday, February 19, 2026 2:10 PM
>
> Replace hardcoded HV_MAP_GPA_DEPOSIT_PAGES usage with
> hv_deposit_memory() which derives the deposit size from
> the hypercall status, and remove the now-unused constant.
>
> The previous code always deposited a fixed 256 pages on
> insufficient memory, ignoring the actual demand reported
> by the hypervisor.

Does the hypervisor report a specific page count demand? I haven't
seen that anywhere. It seems like the deposit memory operation is
always something of a guess.

> hv_deposit_memory() handles different
> deposit statuses, aligning map-GPA retries with the rest
> of the codebase.
>
> This approach may require more allocation and deposit
> hypercall iterations, but avoids over-depositing large
> fixed chunks when fewer pages would suffice. Until any
> performance impact is measured, the more frugal and
> consistent behavior is preferred.
>
> Signed-off-by: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>

From a purely functional standpoint, this change addresses the
concern that I raised. But I don't have any intuition on the performance
impact of having to iterate. hv_deposit_memory() adds only a single
page for some of the statuses, so if there really is a large memory need,
the new code would iterate 256 times to achieve what the existing code
does.

Any idea where the 256 came from in the first place? Was that
empirically determined like some of the other memory deposit counts?

In addition to a potential performance impact, I know the hypervisor tries
to detect denial-of-service attempts that make "too many" calls to the
hypervisor in a short period of time. In such a case, the hypervisor
suspends scheduling the VM for a few seconds before allowing it to resume.
Just need to make sure the hypervisor doesn't think the iterating is a
denial-of-service attack. Or maybe that denial-of-service detection
doesn't apply to the root partition VM.

But from a functional standpoint,
Reviewed-by: Michael Kelley <mhklinux@outlook.com>

> ---
> drivers/hv/mshv_root_hv_call.c | 4 +---
> 1 file changed, 1 insertion(+), 3 deletions(-)
>
> diff --git a/drivers/hv/mshv_root_hv_call.c b/drivers/hv/mshv_root_hv_call.c
> index 7f91096f95a8..317191462b63 100644
> --- a/drivers/hv/mshv_root_hv_call.c
> +++ b/drivers/hv/mshv_root_hv_call.c
> @@ -16,7 +16,6 @@
>
> /* Determined empirically */
> #define HV_INIT_PARTITION_DEPOSIT_PAGES 208
> -#define HV_MAP_GPA_DEPOSIT_PAGES 256
> #define HV_UMAP_GPA_PAGES 512
>
> #define HV_PAGE_COUNT_2M_ALIGNED(pg_count) (!((pg_count) & (0x200 - 1)))
> @@ -239,8 +238,7 @@ static int hv_do_map_gpa_hcall(u64 partition_id, u64 gfn, u64
> page_struct_count,
> 	completed = hv_repcomp(status);
>
> 	if (hv_result_needs_memory(status)) {
> -		ret = hv_call_deposit_pages(NUMA_NO_NODE, partition_id,
> -					    HV_MAP_GPA_DEPOSIT_PAGES);
> +		ret = hv_deposit_memory(partition_id, status);
> 		if (ret)
> 			break;
* Re: [PATCH] mshv: Replace fixed memory deposit with status driven helper
  2026-02-20 17:05 ` Michael Kelley
@ 2026-02-20 19:05   ` Mukesh R
  2026-02-23 18:17   ` Stanislav Kinsburskii
  1 sibling, 0 replies; 5+ messages in thread
From: Mukesh R @ 2026-02-20 19:05 UTC (permalink / raw)
To: Michael Kelley, Stanislav Kinsburskii, kys@microsoft.com,
    haiyangz@microsoft.com, wei.liu@kernel.org, decui@microsoft.com,
    longli@microsoft.com
Cc: linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org

On 2/20/26 09:05, Michael Kelley wrote:
> From: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com> Sent: Thursday, February 19, 2026 2:10 PM
>>
>> Replace hardcoded HV_MAP_GPA_DEPOSIT_PAGES usage with
>> hv_deposit_memory() which derives the deposit size from
>> the hypercall status, and remove the now-unused constant.
>>
>> The previous code always deposited a fixed 256 pages on
>> insufficient memory, ignoring the actual demand reported
>> by the hypervisor.
>
> Does the hypervisor report a specific page count demand? I haven't
> seen that anywhere. It seems like the deposit memory operation is
> always something of a guess.
>
>> hv_deposit_memory() handles different
>> deposit statuses, aligning map-GPA retries with the rest
>> of the codebase.
>>
>> This approach may require more allocation and deposit
>> hypercall iterations, but avoids over-depositing large
>> fixed chunks when fewer pages would suffice. Until any
>> performance impact is measured, the more frugal and
>> consistent behavior is preferred.
>>
>> Signed-off-by: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>
>
> From a purely functional standpoint, this change addresses the
> concern that I raised. But I don't have any intuition on the performance
> impact of having to iterate. hv_deposit_memory() adds only a single

Indeed, it is not insignificant.

Some discussions with the hyp team a while ago resulted in suggestions
around depositing larger sizes, but then there are many places where a
single page suffices. This is just a lateral change. But as this thing
bakes, the heuristics will evolve and we'll do some optimizations around
it... my 2 cents...

Thanks,
-Mukesh

> page for some of the statuses, so if there really is a large memory need,
> the new code would iterate 256 times to achieve what the existing code
> does.
>
> Any idea where the 256 came from in the first place? Was that
> empirically determined like some of the other memory deposit counts?
>
> In addition to a potential performance impact, I know the hypervisor tries
> to detect denial-of-service attempts that make "too many" calls to the
> hypervisor in a short period of time. In such a case, the hypervisor
> suspends scheduling the VM for a few seconds before allowing it to resume.
> Just need to make sure the hypervisor doesn't think the iterating is a
> denial-of-service attack. Or maybe that denial-of-service detection
> doesn't apply to the root partition VM.
>
> But from a functional standpoint,
> Reviewed-by: Michael Kelley <mhklinux@outlook.com>
>
>> ---
>> drivers/hv/mshv_root_hv_call.c | 4 +---
>> 1 file changed, 1 insertion(+), 3 deletions(-)
>>
>> diff --git a/drivers/hv/mshv_root_hv_call.c b/drivers/hv/mshv_root_hv_call.c
>> index 7f91096f95a8..317191462b63 100644
>> --- a/drivers/hv/mshv_root_hv_call.c
>> +++ b/drivers/hv/mshv_root_hv_call.c
>> @@ -16,7 +16,6 @@
>>
>> /* Determined empirically */
>> #define HV_INIT_PARTITION_DEPOSIT_PAGES 208
>> -#define HV_MAP_GPA_DEPOSIT_PAGES 256
>> #define HV_UMAP_GPA_PAGES 512
>>
>> #define HV_PAGE_COUNT_2M_ALIGNED(pg_count) (!((pg_count) & (0x200 - 1)))
>> @@ -239,8 +238,7 @@ static int hv_do_map_gpa_hcall(u64 partition_id, u64 gfn, u64
>> page_struct_count,
>> 	completed = hv_repcomp(status);
>>
>> 	if (hv_result_needs_memory(status)) {
>> -		ret = hv_call_deposit_pages(NUMA_NO_NODE, partition_id,
>> -					    HV_MAP_GPA_DEPOSIT_PAGES);
>> +		ret = hv_deposit_memory(partition_id, status);
>> 		if (ret)
>> 			break;
* Re: [PATCH] mshv: Replace fixed memory deposit with status driven helper
  2026-02-20 17:05 ` Michael Kelley
  2026-02-20 19:05   ` Mukesh R
@ 2026-02-23 18:17   ` Stanislav Kinsburskii
  1 sibling, 0 replies; 5+ messages in thread
From: Stanislav Kinsburskii @ 2026-02-23 18:17 UTC (permalink / raw)
To: Michael Kelley
Cc: kys@microsoft.com, haiyangz@microsoft.com, wei.liu@kernel.org,
    decui@microsoft.com, longli@microsoft.com,
    linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org

On Fri, Feb 20, 2026 at 05:05:09PM +0000, Michael Kelley wrote:
> From: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com> Sent: Thursday, February 19, 2026 2:10 PM
> >
> > Replace hardcoded HV_MAP_GPA_DEPOSIT_PAGES usage with
> > hv_deposit_memory() which derives the deposit size from
> > the hypercall status, and remove the now-unused constant.
> >
> > The previous code always deposited a fixed 256 pages on
> > insufficient memory, ignoring the actual demand reported
> > by the hypervisor.
>
> Does the hypervisor report a specific page count demand? I haven't
> seen that anywhere. It seems like the deposit memory operation is
> always something of a guess.
>

Correct, it does not, except for the *CONTIGUOUS_MEMORY* status. That
status indicates a need for a large contiguous block (at least 8 pages).

> > hv_deposit_memory() handles different
> > deposit statuses, aligning map-GPA retries with the rest
> > of the codebase.
> >
> > This approach may require more allocation and deposit
> > hypercall iterations, but avoids over-depositing large
> > fixed chunks when fewer pages would suffice. Until any
> > performance impact is measured, the more frugal and
> > consistent behavior is preferred.
> >
> > Signed-off-by: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>
>
> From a purely functional standpoint, this change addresses the
> concern that I raised. But I don't have any intuition on the performance
> impact of having to iterate. hv_deposit_memory() adds only a single
> page for some of the statuses, so if there really is a large memory need,
> the new code would iterate 256 times to achieve what the existing code
> does.
>
> Any idea where the 256 came from in the first place? Was that
> empirically determined like some of the other memory deposit counts?
>

Unfortunately, the history of this change has been lost. My guess is
that it was a straightforward optimization to reduce the number of
iterations. But without a clear understanding of the real memory needs
or the performance impact, it was only a guess.

> In addition to a potential performance impact, I know the hypervisor tries
> to detect denial-of-service attempts that make "too many" calls to the
> hypervisor in a short period of time. In such a case, the hypervisor
> suspends scheduling the VM for a few seconds before allowing it to resume.
> Just need to make sure the hypervisor doesn't think the iterating is a
> denial-of-service attack. Or maybe that denial-of-service detection
> doesn't apply to the root partition VM.
>

This deposit hypercall shouldn't run into this issue. If it did, it
would mean that starting 256 VMs at the same time would trigger the same
problem, with one deposit per VM. Since there's no sign of that
happening so far, I'd prefer to keep things simple and revisit it later
if needed.

Thanks,
Stanislav

> But from a functional standpoint,
> Reviewed-by: Michael Kelley <mhklinux@outlook.com>
>
> > ---
> > drivers/hv/mshv_root_hv_call.c | 4 +---
> > 1 file changed, 1 insertion(+), 3 deletions(-)
> >
> > diff --git a/drivers/hv/mshv_root_hv_call.c b/drivers/hv/mshv_root_hv_call.c
> > index 7f91096f95a8..317191462b63 100644
> > --- a/drivers/hv/mshv_root_hv_call.c
> > +++ b/drivers/hv/mshv_root_hv_call.c
> > @@ -16,7 +16,6 @@
> >
> > /* Determined empirically */
> > #define HV_INIT_PARTITION_DEPOSIT_PAGES 208
> > -#define HV_MAP_GPA_DEPOSIT_PAGES 256
> > #define HV_UMAP_GPA_PAGES 512
> >
> > #define HV_PAGE_COUNT_2M_ALIGNED(pg_count) (!((pg_count) & (0x200 - 1)))
> > @@ -239,8 +238,7 @@ static int hv_do_map_gpa_hcall(u64 partition_id, u64 gfn, u64
> > page_struct_count,
> > 	completed = hv_repcomp(status);
> >
> > 	if (hv_result_needs_memory(status)) {
> > -		ret = hv_call_deposit_pages(NUMA_NO_NODE, partition_id,
> > -					    HV_MAP_GPA_DEPOSIT_PAGES);
> > +		ret = hv_deposit_memory(partition_id, status);
> > 		if (ret)
> > 			break;
* Re: [PATCH] mshv: Replace fixed memory deposit with status driven helper
  2026-02-19 22:09 [PATCH] mshv: Replace fixed memory deposit with status driven helper Stanislav Kinsburskii
  2026-02-20 17:05 ` Michael Kelley
@ 2026-02-25  5:20 ` Anirudh Rayabharam
  1 sibling, 0 replies; 5+ messages in thread
From: Anirudh Rayabharam @ 2026-02-25 5:20 UTC (permalink / raw)
To: Stanislav Kinsburskii
Cc: kys, haiyangz, wei.liu, decui, longli, linux-hyperv, linux-kernel

On Thu, Feb 19, 2026 at 10:09:32PM +0000, Stanislav Kinsburskii wrote:
> Replace hardcoded HV_MAP_GPA_DEPOSIT_PAGES usage with
> hv_deposit_memory() which derives the deposit size from
> the hypercall status, and remove the now-unused constant.
>
> The previous code always deposited a fixed 256 pages on
> insufficient memory, ignoring the actual demand reported
> by the hypervisor. hv_deposit_memory() handles different
> deposit statuses, aligning map-GPA retries with the rest
> of the codebase.
>
> This approach may require more allocation and deposit
> hypercall iterations, but avoids over-depositing large
> fixed chunks when fewer pages would suffice. Until any
> performance impact is measured, the more frugal and
> consistent behavior is preferred.
>
> Signed-off-by: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>
> ---
> drivers/hv/mshv_root_hv_call.c | 4 +---
> 1 file changed, 1 insertion(+), 3 deletions(-)
>
> diff --git a/drivers/hv/mshv_root_hv_call.c b/drivers/hv/mshv_root_hv_call.c
> index 7f91096f95a8..317191462b63 100644
> --- a/drivers/hv/mshv_root_hv_call.c
> +++ b/drivers/hv/mshv_root_hv_call.c
> @@ -16,7 +16,6 @@
>
> /* Determined empirically */
> #define HV_INIT_PARTITION_DEPOSIT_PAGES 208
> -#define HV_MAP_GPA_DEPOSIT_PAGES 256
> #define HV_UMAP_GPA_PAGES 512
>
> #define HV_PAGE_COUNT_2M_ALIGNED(pg_count) (!((pg_count) & (0x200 - 1)))
> @@ -239,8 +238,7 @@ static int hv_do_map_gpa_hcall(u64 partition_id, u64 gfn, u64 page_struct_count,
> 	completed = hv_repcomp(status);
>
> 	if (hv_result_needs_memory(status)) {
> -		ret = hv_call_deposit_pages(NUMA_NO_NODE, partition_id,
> -					    HV_MAP_GPA_DEPOSIT_PAGES);
> +		ret = hv_deposit_memory(partition_id, status);
> 		if (ret)
> 			break;

Reviewed-by: Anirudh Rayabharam (Microsoft) <anirudh@anirudhrb.com>