* RE: [PATCH] Drivers: hv: vmbus: Improve the logc of reserving fb_mmio on Gen2 VMs
2026-04-16 18:35 [PATCH] Drivers: hv: vmbus: Improve the logc of reserving fb_mmio on Gen2 VMs Dexuan Cui
@ 2026-04-16 19:58 ` Dexuan Cui
2026-04-17 18:19 ` Hardik Garg
` (3 subsequent siblings)
4 siblings, 0 replies; 11+ messages in thread
From: Dexuan Cui @ 2026-04-16 19:58 UTC (permalink / raw)
To: Dexuan Cui, KY Srinivasan, Haiyang Zhang, wei.liu@kernel.org,
Long Li, linux-hyperv@vger.kernel.org,
linux-kernel@vger.kernel.org, mhklinux@outlook.com,
matthew.ruffell@canonical.com, johansen@templeofstupid.com
Cc: stable@vger.kernel.org
> Subject: [PATCH] Drivers: hv: vmbus: Improve the logc of reserving fb_mmio on
> Gen2 VMs
Sorry for the typo in the subject -- the "logc" should be "logic". If this is the only
issue, I guess Wei can fix it for me :-)
* Re: [PATCH] Drivers: hv: vmbus: Improve the logc of reserving fb_mmio on Gen2 VMs
2026-04-16 18:35 [PATCH] Drivers: hv: vmbus: Improve the logc of reserving fb_mmio on Gen2 VMs Dexuan Cui
2026-04-16 19:58 ` Dexuan Cui
@ 2026-04-17 18:19 ` Hardik Garg
2026-04-17 20:24 ` Krister Johansen
` (2 subsequent siblings)
4 siblings, 0 replies; 11+ messages in thread
From: Hardik Garg @ 2026-04-17 18:19 UTC (permalink / raw)
To: Dexuan Cui, kys, haiyangz, wei.liu, longli, linux-hyperv,
linux-kernel, mhklinux, matthew.ruffell, johansen
Cc: stable
On 4/16/2026 11:35 AM, Dexuan Cui wrote:
> If vmbus_reserve_fb() in the kdump kernel fails to properly reserve the
> framebuffer MMIO range due to a Gen2 VM's screen.lfb_base being zero [1],
> there is an MMIO conflict between the drivers hyperv_drm and pci-hyperv.
> This is especially an issue if pci-hyperv is built-in and hyperv_drm is
> built as a module. Consequently, the kdump kernel fails to detect PCI
> devices via pci-hyperv, and may fail to mount the root file system,
> which may reside on an NVMe disk.
>
> On Gen2 VMs, if the screen.lfb_base is 0 in the kdump kernel, fall
> back to the low MMIO base, which should be equal to the framebuffer
> MMIO base (tested on x64 Windows Server 2016, on x64 and ARM64 Windows
> Server 2025, and on Azure) [2]. In the first kernel, screen.lfb_base
> is not 0; if the user specifies a high resolution, it's not enough to
> only reserve 8MB: in this case, reserve half of the space below 4GB, but
> cap the reservation to 128MB, which is the required framebuffer size of
> the highest resolution 7680*4320 supported by Hyper-V.
>
> Add the cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT) check, because a CoCo
> VM (i.e. Confidential VM) on Hyper-V doesn't have any framebuffer
> device, so there is no need to reserve any MMIO for it.
>
> While at it, fix the comparison "end > VTPM_BASE_ADDRESS" by changing
> the > to >=. Here the 'end' is an inclusive end (typically, it's
> 0xFFFF_FFFF).
>
> [1] https://lore.kernel.org/all/SA1PR21MB692176C1BC53BFC9EAE5CF8EBF51A@SA1PR21MB6921.namprd21.prod.outlook.com/
> [2] https://lore.kernel.org/all/SA1PR21MB69218F955B62DFF62E3E88D2BF222@SA1PR21MB6921.namprd21.prod.outlook.com/
>
> Fixes: 4daace0d8ce8 ("PCI: hv: Add paravirtual PCI front-end for Microsoft Hyper-V VMs")
> CC: stable@vger.kernel.org
> Signed-off-by: Dexuan Cui <decui@microsoft.com>
> ---
> drivers/hv/vmbus_drv.c | 30 ++++++++++++++++++++++++++++--
> 1 file changed, 28 insertions(+), 2 deletions(-)
Reviewed-by: Hardik Garg <hargar@linux.microsoft.com>
Thanks,
Hardik
* Re: [PATCH] Drivers: hv: vmbus: Improve the logc of reserving fb_mmio on Gen2 VMs
2026-04-16 18:35 [PATCH] Drivers: hv: vmbus: Improve the logc of reserving fb_mmio on Gen2 VMs Dexuan Cui
2026-04-16 19:58 ` Dexuan Cui
2026-04-17 18:19 ` Hardik Garg
@ 2026-04-17 20:24 ` Krister Johansen
2026-04-21 6:51 ` Matthew Ruffell
2026-04-23 17:40 ` Michael Kelley
2026-04-30 16:33 ` kernel test robot
4 siblings, 1 reply; 11+ messages in thread
From: Krister Johansen @ 2026-04-17 20:24 UTC (permalink / raw)
To: Dexuan Cui
Cc: kys, haiyangz, wei.liu, longli, linux-hyperv, linux-kernel,
mhklinux, matthew.ruffell, stable
On Thu, Apr 16, 2026 at 11:35:29AM -0700, Dexuan Cui wrote:
> If vmbus_reserve_fb() in the kdump kernel fails to properly reserve the
> framebuffer MMIO range due to a Gen2 VM's screen.lfb_base being zero [1],
> there is an MMIO conflict between the drivers hyperv_drm and pci-hyperv.
> This is especially an issue if pci-hyperv is built-in and hyperv_drm is
> built as a module. Consequently, the kdump kernel fails to detect PCI
> devices via pci-hyperv, and may fail to mount the root file system,
> which may reside on an NVMe disk.
>
> On Gen2 VMs, if the screen.lfb_base is 0 in the kdump kernel, fall
> back to the low MMIO base, which should be equal to the framebuffer
> MMIO base (tested on x64 Windows Server 2016, on x64 and ARM64 Windows
> Server 2025, and on Azure) [2]. In the first kernel, screen.lfb_base
> is not 0; if the user specifies a high resolution, it's not enough to
> only reserve 8MB: in this case, reserve half of the space below 4GB, but
> cap the reservation to 128MB, which is the required framebuffer size of
> the highest resolution 7680*4320 supported by Hyper-V.
>
> Add the cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT) check, because a CoCo
> VM (i.e. Confidential VM) on Hyper-V doesn't have any framebuffer
> device, so there is no need to reserve any MMIO for it.
>
> While at it, fix the comparison "end > VTPM_BASE_ADDRESS" by changing
> the > to >=. Here the 'end' is an inclusive end (typically, it's
> 0xFFFF_FFFF).
>
> [1] https://lore.kernel.org/all/SA1PR21MB692176C1BC53BFC9EAE5CF8EBF51A@SA1PR21MB6921.namprd21.prod.outlook.com/
> [2] https://lore.kernel.org/all/SA1PR21MB69218F955B62DFF62E3E88D2BF222@SA1PR21MB6921.namprd21.prod.outlook.com/
>
> Fixes: 4daace0d8ce8 ("PCI: hv: Add paravirtual PCI front-end for Microsoft Hyper-V VMs")
> CC: stable@vger.kernel.org
> Signed-off-by: Dexuan Cui <decui@microsoft.com>
> ---
> drivers/hv/vmbus_drv.c | 30 ++++++++++++++++++++++++++++--
> 1 file changed, 28 insertions(+), 2 deletions(-)
Thanks for the updated patch. I tested this on the arm64 instances that
had been failing, and confirmed that without the patch the failure still
occurred; with the patch applied, networking attached correctly in the
dump environment and kdumps were successful.
Tested-by: Krister Johansen <kjlx@templeofstupid.com>
-K
* Re: [PATCH] Drivers: hv: vmbus: Improve the logc of reserving fb_mmio on Gen2 VMs
2026-04-17 20:24 ` Krister Johansen
@ 2026-04-21 6:51 ` Matthew Ruffell
0 siblings, 0 replies; 11+ messages in thread
From: Matthew Ruffell @ 2026-04-21 6:51 UTC (permalink / raw)
To: Krister Johansen
Cc: Dexuan Cui, kys, haiyangz, wei.liu, longli, linux-hyperv,
linux-kernel, mhklinux, stable
Thanks Dexuan for all your hard work and analysis on this patch.
I have tested this patch on Azure with:
- Standard_D4ads_v5
- Standard_D4ads_v6
with the following images:
"Ubuntu Server 22.04 LTS - x64 Gen2"
"Ubuntu Server 24.04 LTS - x64 Gen2"
with the following kernels:
- 7.1 merge window at c1f49dea2b8f335813d3b348fd39117fb8efb428
- 7.1 merge window at c1f49dea2b8f335813d3b348fd39117fb8efb428 + this patch
Without this patch, I could reproduce the issue on 22.04 + v6 based instance
types.
I can confirm that with this patch, v6 instance types kdump, create a
vmcore, and restart correctly without running into MMIO issues.
I can confirm that with this patch, v5 instance types continue to operate the
same as they did previously.
Tested-by: Matthew Ruffell <matthew.ruffell@canonical.com>
On Sat, 18 Apr 2026 at 08:24, Krister Johansen <kjlx@templeofstupid.com> wrote:
>
> On Thu, Apr 16, 2026 at 11:35:29AM -0700, Dexuan Cui wrote:
> > If vmbus_reserve_fb() in the kdump kernel fails to properly reserve the
> > framebuffer MMIO range due to a Gen2 VM's screen.lfb_base being zero [1],
> > there is an MMIO conflict between the drivers hyperv_drm and pci-hyperv.
> > This is especially an issue if pci-hyperv is built-in and hyperv_drm is
> > built as a module. Consequently, the kdump kernel fails to detect PCI
> > devices via pci-hyperv, and may fail to mount the root file system,
> > which may reside on an NVMe disk.
> >
> > On Gen2 VMs, if the screen.lfb_base is 0 in the kdump kernel, fall
> > back to the low MMIO base, which should be equal to the framebuffer
> > MMIO base (tested on x64 Windows Server 2016, on x64 and ARM64 Windows
> > Server 2025, and on Azure) [2]. In the first kernel, screen.lfb_base
> > is not 0; if the user specifies a high resolution, it's not enough to
> > only reserve 8MB: in this case, reserve half of the space below 4GB, but
> > cap the reservation to 128MB, which is the required framebuffer size of
> > the highest resolution 7680*4320 supported by Hyper-V.
> >
> > Add the cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT) check, because a CoCo
> > VM (i.e. Confidential VM) on Hyper-V doesn't have any framebuffer
> > device, so there is no need to reserve any MMIO for it.
> >
> > While at it, fix the comparison "end > VTPM_BASE_ADDRESS" by changing
> > the > to >=. Here the 'end' is an inclusive end (typically, it's
> > 0xFFFF_FFFF).
> >
> > [1] https://lore.kernel.org/all/SA1PR21MB692176C1BC53BFC9EAE5CF8EBF51A@SA1PR21MB6921.namprd21.prod.outlook.com/
> > [2] https://lore.kernel.org/all/SA1PR21MB69218F955B62DFF62E3E88D2BF222@SA1PR21MB6921.namprd21.prod.outlook.com/
> >
> > Fixes: 4daace0d8ce8 ("PCI: hv: Add paravirtual PCI front-end for Microsoft Hyper-V VMs")
> > CC: stable@vger.kernel.org
> > Signed-off-by: Dexuan Cui <decui@microsoft.com>
> > ---
> > drivers/hv/vmbus_drv.c | 30 ++++++++++++++++++++++++++++--
> > 1 file changed, 28 insertions(+), 2 deletions(-)
>
> Thanks for the updated patch. I tested this on the arm64 instances that
> had been failing, and confirmed that without the patch the failure still
> occurred; with the patch applied, networking attached correctly in the
> dump environment and kdumps were successful.
>
> Tested-by: Krister Johansen <kjlx@templeofstupid.com>
>
> -K
* RE: [PATCH] Drivers: hv: vmbus: Improve the logc of reserving fb_mmio on Gen2 VMs
2026-04-16 18:35 [PATCH] Drivers: hv: vmbus: Improve the logc of reserving fb_mmio on Gen2 VMs Dexuan Cui
` (2 preceding siblings ...)
2026-04-17 20:24 ` Krister Johansen
@ 2026-04-23 17:40 ` Michael Kelley
2026-04-29 3:12 ` Dexuan Cui
2026-04-30 16:33 ` kernel test robot
4 siblings, 1 reply; 11+ messages in thread
From: Michael Kelley @ 2026-04-23 17:40 UTC (permalink / raw)
To: Dexuan Cui, kys@microsoft.com, haiyangz@microsoft.com,
wei.liu@kernel.org, longli@microsoft.com,
linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
matthew.ruffell@canonical.com, johansen@templeofstupid.com
Cc: stable@vger.kernel.org
From: Dexuan Cui <decui@microsoft.com> Sent: Thursday, April 16, 2026 11:35 AM
>
> If vmbus_reserve_fb() in the kdump kernel fails to properly reserve the
This problem has wider scope than just kdump. Any kexec'ed kernel would see
the same problem, though kdump is probably the most common case. But the
discussion here, and the mention of kdump in the code comments, should be
adjusted accordingly.
> framebuffer MMIO range due to a Gen2 VM's screen.lfb_base being zero [1],
> there is an MMIO conflict between the drivers hyperv_drm and pci-hyperv.
You describe an MMIO "conflict" without giving the details. Is that
intentional to keep the commit message from being too long? It might be
helpful to future readers to say a little more about how PCI devices must not
use MMIO space that the hypervisor has assigned to the frame buffer.
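To spell out the mechanism: both drivers carve their ranges out of the
same hyperv_mmio resource tree via vmbus_allocate_mmio(). A rough sketch
of the failure mode (the call site is simplified for illustration):

	/*
	 * hyperv_drm (like the old hyperv_fb) requests the frame buffer
	 * range with fb_overlap_ok == true. pci-hyperv requests BAR
	 * space with fb_overlap_ok == false, which only avoids the
	 * fb_mmio region if it was actually reserved; if lfb_base was 0
	 * in the kexec'ed kernel and fb_mmio is NULL, nothing stops the
	 * allocator from handing the frame buffer range to a PCI BAR.
	 */
	ret = vmbus_allocate_mmio(&res, hv_dev, 0, -1, bar_size, align,
				  false /* fb_overlap_ok */);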
> This is especially an issue if pci-hyperv is built-in and hyperv_drm is
> built as a module. Consequently, the kdump kernel fails to detect PCI
> devices via pci-hyperv, and may fail to mount the root file system,
> which may reside on an NVMe disk.
It might not just be pci-hyperv that conflicts. The recently submitted
dxgkrnl driver also does vmbus_allocate_mmio(), but I haven't looked
at the details of exactly what it is doing.
>
> On Gen2 VMs, if the screen.lfb_base is 0 in the kdump kernel, fall
> back to the low MMIO base, which should be equal to the framebuffer
> MMIO base (tested on x64 Windows Server 2016, on x64 and ARM64 Windows
> Server 2025, and on Azure) [2]. In the first kernel, screen.lfb_base
> is not 0; if the user specifies a high resolution, it's not enough to
> only reserve 8MB: in this case, reserve half of the space below 4GB, but
> cap the reservation to 128MB, which is the required framebuffer size of
> the highest resolution 7680*4320 supported by Hyper-V.
As you noted in the detailed discussion in the other email thread [2],
there's a Gen1 VM case that this patch doesn't fix. For completeness,
perhaps that case should be called out in this commit message.
>
> Add the cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT) check, because a CoCo
> VM (i.e. Confidential VM) on Hyper-V doesn't have any framebuffer
> device, so there is no need to reserve any MMIO for it.
>
> While at it, fix the comparison "end > VTPM_BASE_ADDRESS" by changing
> the > to >=. Here the 'end' is an inclusive end (typically, it's
> 0xFFFF_FFFF).
>
> [1] https://lore.kernel.org/all/SA1PR21MB692176C1BC53BFC9EAE5CF8EBF51A@SA1PR21MB6921.namprd21.prod.outlook.com/
> [2] https://lore.kernel.org/all/SA1PR21MB69218F955B62DFF62E3E88D2BF222@SA1PR21MB6921.namprd21.prod.outlook.com/
>
> Fixes: 4daace0d8ce8 ("PCI: hv: Add paravirtual PCI front-end for Microsoft Hyper-V VMs")
> CC: stable@vger.kernel.org
> Signed-off-by: Dexuan Cui <decui@microsoft.com>
> ---
> drivers/hv/vmbus_drv.c | 30 ++++++++++++++++++++++++++++--
> 1 file changed, 28 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
> index f0d0803d1e16..a0b34f9e426a 100644
> --- a/drivers/hv/vmbus_drv.c
> +++ b/drivers/hv/vmbus_drv.c
> @@ -37,6 +37,7 @@
> #include <linux/dma-map-ops.h>
> #include <linux/pci.h>
> #include <linux/export.h>
> +#include <linux/cc_platform.h>
> #include <clocksource/hyperv_timer.h>
> #include <asm/mshyperv.h>
> #include "hyperv_vmbus.h"
> @@ -2327,8 +2328,8 @@ static acpi_status vmbus_walk_resources(struct acpi_resource *res, void *ctx)
> return AE_NO_MEMORY;
>
> /* If this range overlaps the virtual TPM, truncate it. */
> - if (end > VTPM_BASE_ADDRESS && start < VTPM_BASE_ADDRESS)
> - end = VTPM_BASE_ADDRESS;
> + if (end >= VTPM_BASE_ADDRESS && start < VTPM_BASE_ADDRESS)
> + end = VTPM_BASE_ADDRESS - 1;
>
> new_res->name = "hyperv mmio";
> new_res->flags = IORESOURCE_MEM;
> @@ -2395,13 +2396,36 @@ static void vmbus_mmio_remove(void)
> static void __maybe_unused vmbus_reserve_fb(void)
> {
> resource_size_t start = 0, size;
> + resource_size_t low_mmio_base;
> struct pci_dev *pdev;
>
> + /* Hyper-V CoCo guests do not have a framebuffer device. */
> + if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
> + return;
This test checks feature "A" (mem encryption) in order to infer
the presence of feature "B" (no framebuffer), because current
configurations happen to always have "A" and "B" at the same time. But
the linkage between the features is tenuous, and if configurations should
change in the future, testing this way could be bogus. It works now, but I'm
leery of depending on the linkage between "A" and "B".
You could set up a "can_have_framebuffer" flag in ms_hyperv_init_platform()
if running in a CVM, and test that flag here. But I'd suggest just dropping
this optimization. CVMs are always Gen2 (and that's not going to change),
so they have plenty of low mmio space. And at the moment, CVMs don't
support PCI devices, so can't encounter a conflict (though conceivably
some new flavor of CVM in the future could support PCI devices).
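For concreteness, the flag approach would look something like this
(just a sketch; "no_framebuffer" is a name I made up):

	/* in struct ms_hyperv_info (include/asm-generic/mshyperv.h) */
	bool no_framebuffer;

	/* in ms_hyperv_init_platform() */
	if (hv_is_isolation_supported())
		ms_hyperv.no_framebuffer = true;

	/* in vmbus_reserve_fb() */
	if (ms_hyperv.no_framebuffer)
		return;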
> +
> if (efi_enabled(EFI_BOOT)) {
> /* Gen2 VM: get FB base from EFI framebuffer */
> if (IS_ENABLED(CONFIG_SYSFB)) {
> start = sysfb_primary_display.screen.lfb_base;
> size = max_t(__u32, sysfb_primary_display.screen.lfb_size, 0x800000);
> +
> + low_mmio_base = hyperv_mmio->start;
> + if (!low_mmio_base || low_mmio_base >= SZ_4G ||
> + (start && start < low_mmio_base)) {
> + pr_warn("Unexpected low mmio base 0x%pa\n", &low_mmio_base);
> + } else {
> + /*
> + * If the kdump kernel's lfb_base is 0,
As mentioned earlier, this case isn't just kdump kernels.
> + * fall back to the low mmio base.
> + */
> + if (!start)
> + start = low_mmio_base;
> + /*
> + * Reserve half of the space below 4GB for high
> + * resolutions, but cap the reservation to 128MB.
> + */
> + size = min((SZ_4G - start) / 2, SZ_128M);
> + }
> }
> } else {
> /* Gen1 VM: get FB base from PCI */
> @@ -2433,6 +2457,8 @@ static void __maybe_unused vmbus_reserve_fb(void)
> */
> for (; !fb_mmio && (size >= 0x100000); size >>= 1)
> fb_mmio = __request_region(hyperv_mmio, start, size, fb_mmio_name, 0);
Just above this "for" loop, "start" is tested for 0. This patch eliminates the main
reason start might be 0. But I guess it's still possible that the legacy PCI device BAR
might return 0 for a Gen1 VM? Or you might get 0 if the pr_warn() about low
mmio base is triggered. But I'm thinking maybe a pr_warn() should be done if
start is zero.
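I.e., something along these lines in place of the silent "if (!start)"
return (a sketch):

	if (!start) {
		pr_warn("No framebuffer base found; not reserving fb_mmio\n");
		return;
	}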
> +
> + pr_info("hv_mmio=%pR,%pR fb=%pR\n", hyperv_mmio, hyperv_mmio->sibling, fb_mmio);
Outputting the above info is nice!
Michael
* RE: [PATCH] Drivers: hv: vmbus: Improve the logc of reserving fb_mmio on Gen2 VMs
2026-04-23 17:40 ` Michael Kelley
@ 2026-04-29 3:12 ` Dexuan Cui
2026-04-29 18:01 ` Michael Kelley
0 siblings, 1 reply; 11+ messages in thread
From: Dexuan Cui @ 2026-04-29 3:12 UTC (permalink / raw)
To: Michael Kelley, KY Srinivasan, Haiyang Zhang, wei.liu@kernel.org,
Long Li, linux-hyperv@vger.kernel.org,
linux-kernel@vger.kernel.org, matthew.ruffell@canonical.com,
johansen@templeofstupid.com
Cc: stable@vger.kernel.org
> From: Michael Kelley <mhklinux@outlook.com>
> Sent: Thursday, April 23, 2026 10:40 AM
Sorry for the late response! I got sidetracked by something else.
> > If vmbus_reserve_fb() in the kdump kernel fails to properly reserve the
>
> This problem has wider scope than just kdump. Any kexec'ed kernel would see
> the same problem, though kdump is probably the most common case. But the
> discussion here, and the mention of kdump in the code comments, should be
> adjusted accordingly.
Agreed. I'll post v2, which will use "kdump/kexec".
> > framebuffer MMIO range due to a Gen2 VM's screen.lfb_base being zero [1],
> > there is an MMIO conflict between the drivers hyperv_drm and pci-hyperv.
>
> You describe an MMIO "conflict" without giving the details. Is that
> intentional to keep the commit message from being too long? It might be
Yes.
> helpful to future readers to say a little more about how PCI devices must not
> use MMIO space that the hypervisor has assigned to the frame buffer.
Will do.
> As you noted in the detailed discussion in the other email thread [2],
> there's a Gen1 VM case that this patch doesn't fix. For completeness,
> perhaps that case should be called out in this commit message.
Will do.
> > + /* Hyper-V CoCo guests do not have a framebuffer device. */
> > + if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
> > + return;
>
> This test is testing feature "A" (mem encryption) in order to determine
> the presence of feature "B" (no framebuffer), because current
> configurations happen to always have "A" and "B" at the same time. But
> the linkage between the features is tenuous, and if configurations should
> change in the future, testing this way could be bogus. It works now, but I'm
> leery of depending on the linkage between "A" and "B".
>
> You could set up a "can_have_framebuffer" flag in ms_hyperv_init_platform()
> if running in a CVM, and test that flag here. But I'd suggest just dropping
> this optimization. CVMs are always Gen2 (and that's not going to change),
> so they have plenty of low mmio space.
This is not true on a lab host; e.g. I have a TDX VM created by these
2 commands (without the 2nd command, Hyper-V won't allow the TDX VM
to start):
New-VM -Generation 2 -GuestStateIsolationType Tdx -Name $vmName
Disable-VMConsoleSupport -VMName $vmName
The low_mmio_base is still 4GB-128MB. In this case, it's not a good idea
to try to reserve the 128MB:
1) the available low MMIO size is smaller than 128MB due to the vTPM
MMIO range.
2) even if we can reserve the 109.25MB low mmio range
[0xf8000000-0xfed3ffff], we may not want to do that, just in case
some assigned PCI device has 32-bit BARs.
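(Spelling out the arithmetic: [0xf8000000, 0xfed3ffff] spans
0xfed40000 - 0xf8000000 = 0x06d40000 = 114,556,928 bytes = 109.25MB,
i.e. the 128MB of low MMIO minus the vTPM range that starts at
VTPM_BASE_ADDRESS, 0xfed40000.)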
So, IMO we need to keep the check:
+ if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
+ return;
BTW, I think this may be a slightly better check here:
+ if (hv_is_isolation_supported())
+ return;
A CVM on Hyper-V won't start without the command line
Disable-VMConsoleSupport -VMName $vmName
IMO this is very unlikely to change in the future, because the Hyper-V
synthetic framebuffer VMBus device is not a trusted device for a CVM,
so there is no reason for Hyper-V to offer such a device to CVMs; even
if the host offers it, currently the guest hv_vmbus driver ignores it.
When we assign a physical PCI GPU device to a CVM, I'm not sure if there
is any framebuffer from the GPU or not. Even if there is, that's a completely
different scenario and not reserving some low MMIO for "framebuffer"
is unrelated: I think hyperv_drm (or the deprecated hyperv_fb) is the only
driver that sets the fb_overlap_ok parameter of vmbus_allocate_mmio().
> And at the moment, CVMs don't
> support PCI devices,
This is not true: recently I created a "Standard DC16eds v6" TDX CVM
on Azure, and I did see two NVMe local temporary disks in "nvme list"
(here TDISP is not used). In 2023, we added the commit
2c6ba4216844 ("PCI: hv: Enable PCI pass-thru devices in Confidential VMs")
and I believe some users are running CVMs with GPUs.
> so can't encounter a conflict (though conceivably
Correct, since there is no legacy or synthetic framebuffer device for CVMs.
> some new flavor of CVM in the future could support PCI devices).
>
> > +
> > if (efi_enabled(EFI_BOOT)) {
> > /* Gen2 VM: get FB base from EFI framebuffer */
> > if (IS_ENABLED(CONFIG_SYSFB)) {
> > start = sysfb_primary_display.screen.lfb_base;
> > size = max_t(__u32, sysfb_primary_display.screen.lfb_size, 0x800000);
> > +
> > + low_mmio_base = hyperv_mmio->start;
> > + if (!low_mmio_base || low_mmio_base >= SZ_4G ||
> > + (start && start < low_mmio_base)) {
> > + pr_warn("Unexpected low mmio base
> 0x%pa\n", &low_mmio_base);
> > + } else {
> > + /*
> > + * If the kdump kernel's lfb_base is 0,
>
> As mentioned earlier, this case isn't just kdump kernels.
Yes, the first kernel also runs here with a non-zero 'start'.
>
> > + * fall back to the low mmio base.
> > + */
> > + if (!start)
> > + start = low_mmio_base;
> > + /*
> > + * Reserve half of the space below 4GB for high
> > + * resolutions, but cap the reservation to 128MB.
> > + */
> > + size = min((SZ_4G - start) / 2, SZ_128M);
> > + }
> > }
> > } else {
> > /* Gen1 VM: get FB base from PCI */
> > @@ -2433,6 +2457,8 @@ static void __maybe_unused vmbus_reserve_fb(void)
> > */
> > for (; !fb_mmio && (size >= 0x100000); size >>= 1)
> > fb_mmio = __request_region(hyperv_mmio, start, size, fb_mmio_name, 0);
>
> Just above this "for" loop, "start" is tested for 0. This patch eliminates the main
> reason start might be 0. But I guess it's still possible that the legacy PCI device
> BAR might return 0 for a Gen1 VM?
IMO the legacy PCI BAR's base in a Gen1 VM can't be 0.
> Or you might get 0 if the pr_warn() about low
> mmio base is triggered. But I'm thinking maybe a pr_warn() should be done if
> start is zero.
Ok, will add a pr_warn() here.
> > +
> > + pr_info("hv_mmio=%pR,%pR fb=%pR\n", hyperv_mmio, hyperv_mmio->sibling, fb_mmio);
>
> Outputting the above info is nice!
>
> Michael
Thanks for all the good input! Will post v2 for review.
Thanks,
Dexuan
* RE: [PATCH] Drivers: hv: vmbus: Improve the logc of reserving fb_mmio on Gen2 VMs
2026-04-29 3:12 ` Dexuan Cui
@ 2026-04-29 18:01 ` Michael Kelley
2026-04-30 22:16 ` Dexuan Cui
0 siblings, 1 reply; 11+ messages in thread
From: Michael Kelley @ 2026-04-29 18:01 UTC (permalink / raw)
To: Dexuan Cui, Michael Kelley, KY Srinivasan, Haiyang Zhang,
wei.liu@kernel.org, Long Li, linux-hyperv@vger.kernel.org,
linux-kernel@vger.kernel.org, matthew.ruffell@canonical.com,
johansen@templeofstupid.com
Cc: stable@vger.kernel.org
From: Dexuan Cui <DECUI@microsoft.com> Sent: Tuesday, April 28, 2026 8:13 PM
> > From: Michael Kelley <mhklinux@outlook.com> Sent: Thursday, April 23, 2026 10:40 AM
[snip]
> > > + /* Hyper-V CoCo guests do not have a framebuffer device. */
> > > + if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
> > > + return;
> >
> > This test is testing feature "A" (mem encryption) in order to determine
> > the presence of feature "B" (no framebuffer), because current
> > configurations happen to always have "A" and "B" at the same time. But
> > the linkage between the features is tenuous, and if configurations should
> > change in the future, testing this way could be bogus. It works now, but I'm
> > leery of depending on the linkage between "A" and "B".
> >
> > You could set up a "can_have_framebuffer" flag in ms_hyperv_init_platform()
> > if running in a CVM, and test that flag here. But I'd suggest just dropping
> > this optimization. CVMs are always Gen2 (and that's not going to change),
> > so they have plenty of low mmio space.
>
> This is not true on a lab host, e.g. I have a TDX VM on a lab host created
> by these 2 commands (without the 2nd command, Hyper-V won't allow
> the TDX VM to start):
>
> New-VM -Generation 2 -GuestStateIsolationType Tdx -Name $vmName
> Disable-VMConsoleSupport -VMName $vmName
>
> The low_mmio_base is still 4GB-128MB. In this case, it's not a good idea
> to try to reserve the 128MB:
>
> 1) the available low MMIO size is smaller than 128MB due to the vTPM
> MMIO range.
>
> 2) even if we can reserve the 109.25 low mmio range
> [0xf8000000-0xfed3ffff], we may not want to do that, just in case
> some assigned PCI device has 32-bit BARs.
>
> So, IMO we need to keep the check:
> + if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
> + return;
>
> BTW, I think this may be a slightly better check here:
> + if (hv_is_isolation_supported())
> + return;
Agreed. Using hv_is_isolation_supported() seems better than
cc_platform_has() for this purpose.
>
> A CVM on Hyper-V won't start without the command line
> Disable-VMConsoleSupport -VMName $vmName
Unfortunately, on my laptop Hyper-V, a VM with VBS Isolation appears
to *not* require Disable-VMConsoleSupport. I can start the VM, and the
VM is offered the VMBus synthvid, mouse, and keyboard devices.
But what's weird in this case is that vmbus_reserve_fb() sees lfb_base
and lfb_size as 0.
flag to true for the synthvid device, and the Hyper-V DRM driver loads and
initializes. In doing so, the vmconnect.exe window is resized larger, as is
done in a normal VM. /proc/iomem shows that the DRM driver claimed
the expected MMIO range at the start of low MMIO space. I can run a user
space program that mmaps /dev/fb0 and writes pixels to the mmap'ed
memory, and that succeeds as it would in a normal VM, but the
vmconnect.exe window doesn't show anything. It appears that the Hyper-V
host has allocated memory for the frame buffer, but is ignoring anything
that is written to it.
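For reference, the test is just the standard fbdev mmap pattern. A
minimal sketch (not the exact program I ran):

	#include <fcntl.h>
	#include <stdint.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <sys/mman.h>
	#include <unistd.h>
	#include <linux/fb.h>

	int main(void)
	{
		struct fb_var_screeninfo var;
		struct fb_fix_screeninfo fix;
		int fd = open("/dev/fb0", O_RDWR);

		if (fd < 0 || ioctl(fd, FBIOGET_VSCREENINFO, &var) ||
		    ioctl(fd, FBIOGET_FSCREENINFO, &fix))
			return 1;

		uint8_t *fb = mmap(NULL, fix.smem_len, PROT_READ | PROT_WRITE,
				   MAP_SHARED, fd, 0);
		if (fb == MAP_FAILED)
			return 1;

		/* Paint the visible lines gray; assumes a linear framebuffer. */
		for (uint32_t y = 0; y < var.yres; y++)
			memset(fb + y * fix.line_length, 0x80,
			       var.xres * (var.bits_per_pixel / 8));

		munmap(fb, fix.smem_len);
		close(fd);
		return 0;
	}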
Running Disable-VMConsoleSupport works as expected -- the synthvid,
mouse, and keyboard devices are no longer offered to the VM.
>
> IMO this is very unlikely to change in the future, because the Hyper-V
> synthetic framebuffer VMBus device is not a trusted device for a CVM,
> so there is no reason for Hyper-V to offer such a device to CVMs; even
> if the host offers it, currently the guest hv_vmbus driver ignores it.
>
In the case of VBS Isolation, if such a VM also had a PCI pass-thru device,
the core problem could recur. I.e., not reserving space for the framebuffer
could allow the PCI device to try to use MMIO space that Hyper-V has
set up for the frame buffer, causing the PCI device to fail. And that's a
worse problem than just having the graphics console not function. I
can't actually try the failure case because I don't have an assignable PCI
device on my laptop, but it seems likely based on the evidence that
Hyper-V is setting up a framebuffer device.
So instead of not reserving any MMIO space for the framebuffer on
CVMs, the code you already have limits the reservation to half of the
MMIO space below 4 GB. Won't that work to avoid exhausting the low
MMIO space in a CVM that's running on a local Hyper-V with only 128
MiB of low MMIO space?
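(Working that through for your lab-host case: with start =
low_mmio_base = 0xf8000000, size = min((SZ_4G - 0xf8000000) / 2, SZ_128M)
= min(64MB, 128MB) = 64MB, so roughly 45MB of the 109.25MB usable low
window would still remain for 32-bit BARs.)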
> When we assign a physical PCI GPU device to a CVM, I'm not sure if there
> is any framebuffer from the GPU or not. Even if there is, that's a completely
> different scenario and not reserving some low MMIO for "framebuffer"
> is unrelated: I think hyperv_drm (or the deprecated hyperv_fb) is the only
> driver that sets the fb_overlap_ok parameter of vmbus_allocate_mmio().
>
> > And at the moment, CVMs don't
> > support PCI devices,
>
> This is not true: recently I created a "Standard DC16eds v6" TDX CVM
> on Azure, and I did see two NVMe local temporary disks in "nvme list"
> (here TDISP is not used). In 2023, we added the commit
> 2c6ba4216844 ("PCI: hv: Enable PCI pass-thru devices in Confidential VMs")
> and I believe some users are running CVMs with GPUs.
Interesting! I worked on commit 2c6ba4216844, but had not noticed
that Azure now has offerings that makes use of it. I'll take a look at
that TDX VM size.
Thanks,
Michael
* RE: [PATCH] Drivers: hv: vmbus: Improve the logc of reserving fb_mmio on Gen2 VMs
2026-04-29 18:01 ` Michael Kelley
@ 2026-04-30 22:16 ` Dexuan Cui
0 siblings, 0 replies; 11+ messages in thread
From: Dexuan Cui @ 2026-04-30 22:16 UTC (permalink / raw)
To: Michael Kelley, KY Srinivasan, Haiyang Zhang, wei.liu@kernel.org,
Long Li, linux-hyperv@vger.kernel.org,
linux-kernel@vger.kernel.org, matthew.ruffell@canonical.com,
johansen@templeofstupid.com
Cc: stable@vger.kernel.org
> From: Michael Kelley <mhklinux@outlook.com>
> Sent: Wednesday, April 29, 2026 11:01 AM
>
> From: Dexuan Cui <DECUI@microsoft.com> Sent: Tuesday, April 28, 2026 8:13 PM
> ...
> >
> > A CVM on Hyper-V won't start without the command line
> > Disable-VMConsoleSupport -VMName $vmName
This is not true. It turns out I can start a VBS/SNP/TDX VM without the
command line.... Sorry! Not sure why I had the wrong impression -- I
guess I was told to always run the command since day 1, so I subconsciously
thought a VM would not start without it. Or maybe the host behavior
changed? But that seems unlikely to me.
> Unfortunately, on my laptop Hyper-V, a VM with VBS Isolation appears
> to *not* require Disable-VMConsoleSupport. I can start the VM, and the
> VM is offered the VMBus synthvid, mouse, and keyboard devices.
Actually I can also start a VBS VM without Disable-VMConsoleSupport.
> But what's weird in this case is that vmbus_reserve_fb() sees lfb_base
> and lfb_size as 0.
I see the same.
> Furthermore, as a test, I changed the "allowed_in_isolated"
> flag to true for the synthvid device, and the Hyper-V DRM driver loads and
> initializes.
I also changed the flag .allowed_in_isolated to true for HV_SYNTHVID_GUID,
HV_KBD, and HV_MOUSE, but I can't see the devices in "lsvmbus".
In vmbus_onoffer(), I printed the offer->offer.if_type and
offer->offer.if_instance just after the message " Invalid offer %d from the host
supporting isolation", and I indeed don't see the fb/mouse/keyboard devices.
I'm on a recent Hyper-V dev build. Maybe this is why my observation is
not exactly the same.
> In doing so, the vmconnect.exe window is resized larger, as is
> done in a normal VM. /proc/iomem shows that the DRM driver claimed
> the expected MMIO range at the start of low MMIO space. I can run a user
> space program that mmaps /dev/fb0 and writes pixels to the mmap'ed
> memory, and that succeeds as it would in a normal VM, but the
> vmconnect.exe window doesn't show anything. It appears that the Hyper-V
> host has allocated memory for the frame buffer, but is ignoring anything
> that is written to it.
>
> Running Disable-VMConsoleSupport works as expected -- the synthvid,
> mouse, and keyboard devices are no longer offered to the VM.
I even ran "Enable-VMConsoleSupport", which finished without any error,
but I still didn't see the keyboard/mouse/framebuffer devices.
> So instead of not reserving any MMIO space for the framebuffer on
> CVMs, the code you already have limits the reservation to half of the
> MMIO space below 4 GB.
Correct.
> Won't that work to avoid exhausting the low
> MMIO space in a CVM that's running on a local Hyper-V with only 128
> MiB of low MMIO space?
Correct. I'll drop the CVM check in vmbus_reserve_fb() in v2.
* Re: [PATCH] Drivers: hv: vmbus: Improve the logc of reserving fb_mmio on Gen2 VMs
2026-04-16 18:35 [PATCH] Drivers: hv: vmbus: Improve the logc of reserving fb_mmio on Gen2 VMs Dexuan Cui
` (3 preceding siblings ...)
2026-04-23 17:40 ` Michael Kelley
@ 2026-04-30 16:33 ` kernel test robot
2026-04-30 22:42 ` [EXTERNAL] " Dexuan Cui
4 siblings, 1 reply; 11+ messages in thread
From: kernel test robot @ 2026-04-30 16:33 UTC (permalink / raw)
To: Dexuan Cui, kys, haiyangz, wei.liu, longli, linux-hyperv,
linux-kernel, mhklinux, matthew.ruffell, johansen
Cc: llvm, oe-kbuild-all, stable
Hi Dexuan,
kernel test robot noticed the following build warnings:
[auto build test WARNING on linus/master]
[also build test WARNING on v7.1-rc1 next-20260429]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Dexuan-Cui/Drivers-hv-vmbus-Improve-the-logc-of-reserving-fb_mmio-on-Gen2-VMs/20260424-033622
base: linus/master
patch link: https://lore.kernel.org/r/20260416183529.838321-1-decui%40microsoft.com
patch subject: [PATCH] Drivers: hv: vmbus: Improve the logc of reserving fb_mmio on Gen2 VMs
config: i386-buildonly-randconfig-002-20260430 (https://download.01.org/0day-ci/archive/20260501/202605010002.dnnxVZFF-lkp@intel.com/config)
compiler: clang version 20.1.8 (https://github.com/llvm/llvm-project 87f0227cb60147a26a1eeb4fb06e3b505e9c7261)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260501/202605010002.dnnxVZFF-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202605010002.dnnxVZFF-lkp@intel.com/
All warnings (new ones prefixed by >>):
>> drivers/hv/vmbus_drv.c:2403:40: warning: result of comparison of constant 4294967296 with expression of type 'resource_size_t' (aka 'unsigned int') is always false [-Wtautological-constant-out-of-range-compare]
2403 | if (!low_mmio_base || low_mmio_base >= SZ_4G ||
| ~~~~~~~~~~~~~ ^ ~~~~~
1 warning generated.
vim +2403 drivers/hv/vmbus_drv.c
2385
2386 static void __maybe_unused vmbus_reserve_fb(void)
2387 {
2388 resource_size_t start = 0, size;
2389 resource_size_t low_mmio_base;
2390 struct pci_dev *pdev;
2391
2392 /* Hyper-V CoCo guests do not have a framebuffer device. */
2393 if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
2394 return;
2395
2396 if (efi_enabled(EFI_BOOT)) {
2397 /* Gen2 VM: get FB base from EFI framebuffer */
2398 if (IS_ENABLED(CONFIG_SYSFB)) {
2399 start = sysfb_primary_display.screen.lfb_base;
2400 size = max_t(__u32, sysfb_primary_display.screen.lfb_size, 0x800000);
2401
2402 low_mmio_base = hyperv_mmio->start;
> 2403 if (!low_mmio_base || low_mmio_base >= SZ_4G ||
2404 (start && start < low_mmio_base)) {
2405 pr_warn("Unexpected low mmio base 0x%pa\n", &low_mmio_base);
2406 } else {
2407 /*
2408 * If the kdump kernel's lfb_base is 0,
2409 * fall back to the low mmio base.
2410 */
2411 if (!start)
2412 start = low_mmio_base;
2413 /*
2414 * Reserve half of the space below 4GB for high
2415 * resolutions, but cap the reservation to 128MB.
2416 */
2417 size = min((SZ_4G - start) / 2, SZ_128M);
2418 }
2419 }
2420 } else {
2421 /* Gen1 VM: get FB base from PCI */
2422 pdev = pci_get_device(PCI_VENDOR_ID_MICROSOFT,
2423 PCI_DEVICE_ID_HYPERV_VIDEO, NULL);
2424 if (!pdev)
2425 return;
2426
2427 if (pdev->resource[0].flags & IORESOURCE_MEM) {
2428 start = pci_resource_start(pdev, 0);
2429 size = pci_resource_len(pdev, 0);
2430 }
2431
2432 /*
2433 * Release the PCI device so hyperv_drm driver can grab it
2434 * later.
2435 */
2436 pci_dev_put(pdev);
2437 }
2438
2439 if (!start)
2440 return;
2441
2442 /*
2443 * Make a claim for the frame buffer in the resource tree under the
2444 * first node, which will be the one below 4GB. The length seems to
2445 * be underreported, particularly in a Generation 1 VM. So start out
2446 * reserving a larger area and make it smaller until it succeeds.
2447 */
2448 for (; !fb_mmio && (size >= 0x100000); size >>= 1)
2449 fb_mmio = __request_region(hyperv_mmio, start, size, fb_mmio_name, 0);
2450
2451 pr_info("hv_mmio=%pR,%pR fb=%pR\n", hyperv_mmio, hyperv_mmio->sibling, fb_mmio);
2452 }
2453
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* RE: [EXTERNAL] Re: [PATCH] Drivers: hv: vmbus: Improve the logc of reserving fb_mmio on Gen2 VMs
2026-04-30 16:33 ` kernel test robot
@ 2026-04-30 22:42 ` Dexuan Cui
0 siblings, 0 replies; 11+ messages in thread
From: Dexuan Cui @ 2026-04-30 22:42 UTC (permalink / raw)
To: kernel test robot, KY Srinivasan, Haiyang Zhang,
wei.liu@kernel.org, Long Li, linux-hyperv@vger.kernel.org,
linux-kernel@vger.kernel.org, mhklinux@outlook.com,
matthew.ruffell@canonical.com, johansen@templeofstupid.com
Cc: llvm@lists.linux.dev, oe-kbuild-all@lists.linux.dev,
stable@vger.kernel.org
> From: kernel test robot <lkp@intel.com>
> Sent: Thursday, April 30, 2026 9:33 AM
> ...
> config: i386-buildonly-randconfig-002-20260430
> ...
> All warnings (new ones prefixed by >>):
>
> >> drivers/hv/vmbus_drv.c:2403:40: warning: result of comparison of constant
> 4294967296 with expression of type 'resource_size_t' (aka 'unsigned int') is
> always false [-Wtautological-constant-out-of-range-compare]
> 2403 | if (!low_mmio_base || low_mmio_base >= SZ_4G ||
> | ~~~~~~~~~~~~~ ^ ~~~~~
> 1 warning generated.
Thanks for reporting the warning with the i386 kernel config.
I don't know if there are any x86-32 users nowadays, but this warning can be
fixed by:
- if (!low_mmio_base || low_mmio_base >= SZ_4G ||
+ if (!low_mmio_base || upper_32_bits(low_mmio_base) ||
(start && start < low_mmio_base)) {
pr_warn("Unexpected low mmio base 0x%pa\n", &low_mmio_base);
}
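For reference: the warning fires because resource_size_t is phys_addr_t,
which is a 32-bit type on i386 when CONFIG_PHYS_ADDR_T_64BIT is not set,
so the comparison with SZ_4G can never be true there. upper_32_bits()
avoids the out-of-range constant; the kernel headers define it as a
double shift so it is also safe on 32-bit types:

	#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16))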