public inbox for linux-hyperv@vger.kernel.org
From: Wei Liu <wei.liu@kernel.org>
To: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>
Cc: kys@microsoft.com, haiyangz@microsoft.com, wei.liu@kernel.org,
	decui@microsoft.com, longli@microsoft.com,
	linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v4] mshv: Add support for integrated scheduler
Date: Wed, 4 Feb 2026 07:18:16 +0000	[thread overview]
Message-ID: <20260204071816.GN79272@liuwe-devbox-debian-v2.local> (raw)
In-Reply-To: <177006034399.132128.8748943595417271449.stgit@skinsburskii-cloud-desktop.internal.cloudapp.net>

On Mon, Feb 02, 2026 at 07:26:06PM +0000, Stanislav Kinsburskii wrote:
> Query the hypervisor for integrated scheduler support and use it if
> configured.
> 
> Microsoft Hypervisor originally provided two schedulers: root and core. The

Microsoft Hypervisor provides three schedulers: root, classic
(with or without SMT), and core. The latter two are hypervisor-based.

> root scheduler allows the root partition to schedule guest vCPUs across
> physical cores, supporting both time slicing and CPU affinity (e.g., via
> cgroups). In contrast, the core scheduler delegates vCPU-to-physical-core
> scheduling entirely to the hypervisor.
> 
> Direct virtualization introduces a new privileged guest partition type - L1

Level-1 Virtualization Host.

> Virtual Host (L1VH) — which can create child partitions from its own
> resources. These child partitions are effectively siblings, scheduled by
> the hypervisor's core scheduler. This prevents the L1VH parent from setting
> affinity or time slicing for its own processes or guest VPs. While cgroups,
> CFS, and cpuset controllers can still be used, their effectiveness is
> unpredictable, as the core scheduler swaps vCPUs according to its own logic
> (typically round-robin across all allocated physical CPUs). As a result,
> the system may appear to "steal" time from the L1VH and its children.
> 
> To address this, Microsoft Hypervisor introduces the integrated scheduler.
> This allows an L1VH partition to schedule its own vCPUs and those of its
> guests across its "physical" cores, effectively emulating root scheduler
> behavior within the L1VH, while retaining core scheduler behavior for the
> rest of the system.
> 
> The integrated scheduler is controlled by the root partition and gated by
> the vmm_enable_integrated_scheduler capability bit. If set, the hypervisor
> supports the integrated scheduler. The L1VH partition must then check if it
> is enabled by querying the corresponding extended partition property. If
> this property is true, the L1VH partition must use the root scheduler
> logic; otherwise, it must use the core scheduler. This also makes reading
> the VMM capabilities in the L1VH partition a requirement.
> 
> Signed-off-by: Andreea Pintilie <anpintil@microsoft.com>
> Signed-off-by: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>
> ---
[...]
> +++ b/include/hyperv/hvhdk_mini.h
> @@ -87,6 +87,9 @@ enum hv_partition_property_code {
>  	HV_PARTITION_PROPERTY_PRIVILEGE_FLAGS			= 0x00010000,
>  	HV_PARTITION_PROPERTY_SYNTHETIC_PROC_FEATURES		= 0x00010001,
>  
> +	/* Integrated scheduling properties */
> +	HV_PARTITION_PROPERTY_INTEGRATED_SCHEDULER_ENABLED	= 0x00020005,

The internal name is "HvPartitionPropertyHierarchicalIntegratedSchedulerEnabled".

You missed the "Hierarchical" part in the property code name.

Wei

> +
>  	/* Resource properties */
>  	HV_PARTITION_PROPERTY_GPA_PAGE_ACCESS_TRACKING		= 0x00050005,
>  	HV_PARTITION_PROPERTY_UNIMPLEMENTED_MSR_ACTION		= 0x00050017,
> @@ -102,7 +105,7 @@ enum hv_partition_property_code {
>  };
>  
>  #define HV_PARTITION_VMM_CAPABILITIES_BANK_COUNT		1
> -#define HV_PARTITION_VMM_CAPABILITIES_RESERVED_BITFIELD_COUNT	59
> +#define HV_PARTITION_VMM_CAPABILITIES_RESERVED_BITFIELD_COUNT	57
>  
>  struct hv_partition_property_vmm_capabilities {
>  	u16 bank_count;
> @@ -119,6 +122,8 @@ struct hv_partition_property_vmm_capabilities {
>  			u64 reservedbit3: 1;
>  #endif
>  			u64 assignable_synthetic_proc_features: 1;
> +			u64 reservedbit5: 1;
> +			u64 vmm_enable_integrated_scheduler : 1;
>  			u64 reserved0: HV_PARTITION_VMM_CAPABILITIES_RESERVED_BITFIELD_COUNT;
>  		} __packed;
>  	};
> 
> 
> 


Thread overview: 3+ messages
2026-02-02 19:26 [PATCH v4] mshv: Add support for integrated scheduler Stanislav Kinsburskii
2026-02-04  7:18 ` Wei Liu [this message]
2026-02-18  8:05   ` Wei Liu
