* [RFC PATCH 0/6] Improve VM DVFS and task placement behavior
@ 2023-03-30 22:43 David Dai
2023-03-30 22:43 ` [RFC PATCH 2/6] kvm: arm64: Add support for get_cur_cpufreq service David Dai
` (7 more replies)
0 siblings, 8 replies; 27+ messages in thread
From: David Dai @ 2023-03-30 22:43 UTC (permalink / raw)
To: Rafael J. Wysocki, Viresh Kumar, Rob Herring, Krzysztof Kozlowski,
Paolo Bonzini, Jonathan Corbet, Marc Zyngier, Oliver Upton,
James Morse, Suzuki K Poulose, Zenghui Yu, Catalin Marinas,
Will Deacon, Mark Rutland, Lorenzo Pieralisi, Sudeep Holla,
Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Daniel Bristot de Oliveira, Valentin Schneider, David Dai
Cc: kernel-team, linux-pm, devicetree, linux-kernel, kvm, linux-doc,
linux-arm-kernel, kvmarm
Hi,
This patch series is a continuation of the talk Saravana gave at LPC 2022
titled "CPUfreq/sched and VM guest workload problems" [1][2][3]. The gist
of the talk is that workloads running in a guest VM get terrible task
placement and DVFS behavior when compared to running the same workload in
the host. Effectively, no EAS for threads inside VMs. This would make power
and performance terrible just by running the workload in a VM even if we
assume there is zero virtualization overhead.
We have been iterating over different options for communicating between the
guest and the host, and over different ways of applying the information coming
from the guest/host, to figure out the best performance and power improvements
we could get.
The patch series in its current state is NOT meant for landing in the
upstream kernel. We are sending it to share our current progress and the data
we have so far. It is meant to be easy to cherry-pick and test on various
devices to see what performance and power benefits it might give others.
With this series, a workload running in a VM gets the same task placement
and DVFS treatment as it would when running in the host.
As expected, we see a significant performance improvement and a better
performance/power ratio. If anyone else wants to try this out with their VM
workloads and report findings, that would be very much appreciated.
The idea is to improve VM CPUfreq/sched behavior by:
- Having the guest kernel do accurate load tracking by taking the host CPU
arch/type and frequency into account.
- Sharing vCPU run queue utilization information with the host so that the
host can do proper frequency scaling and task placement on the host side (a
rough guest-side sketch of this follows below).
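To make the second point concrete, here is a rough sketch (not code from this
series) of what the guest side of that hand-off could look like, assuming the
kernel's SMCCC helpers and the ARM_SMCCC_VENDOR_HYP_KVM_UTIL_HINT_FUNC_ID
definition added later in this series:

  #include <linux/arm-smccc.h>

  /*
   * Sketch only: forward a util value (0-1024) for the currently running
   * vCPU to the host. 1024 means the highest performance point, normalized
   * across all CPUs. Feature detection and error handling are elided.
   */
  static void kvm_send_util_hint(unsigned long util)
  {
          struct arm_smccc_res res;

          arm_smccc_1_1_invoke(ARM_SMCCC_VENDOR_HYP_KVM_UTIL_HINT_FUNC_ID,
                               util, &res);
          /* res.a0 is SMCCC_RET_NOT_SUPPORTED (-1) if the host ignored it */
  }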
Results:
========
So far, the best results have come from using hypercalls (more on this below)
to communicate between the host and the guest, and from treating the vCPU run
queue util similarly to util_est on the host-side vCPU thread. That is what
this patch series does.
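(On the host side, patch 3 applies the hint to the vCPU thread via
sched_setattr_nocheck() with a new SCHED_FLAG_UTIL_GUEST flag; conceptually
the scheduler then treats the reported value as a floor on the vCPU thread's
estimated utilization, roughly:

  /* Illustration only, not the actual patch 1 code; util_guest here stands
   * in for the per-task field this series adds. */
  util = max(task_util_est(p), READ_ONCE(p->util_guest));

so that host frequency selection and task placement see the demand the guest
reported.)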
Let's look at the results for this series first and then at the other options
we have tried or are still evaluating:
Use cases running Android inside a VM on a Chromebook:
======================================================
PCMark (emulates real-world use cases)
Higher is better
+-------------------+----------+------------+--------+
| Test Case (score) | Baseline | Util_guest | %delta |
+-------------------+----------+------------+--------+
| Weighted Total    | 6136     | 7274       | +19%   |
+-------------------+----------+------------+--------+
| Web Browsing      | 5558     | 6273       | +13%   |
+-------------------+----------+------------+--------+
| Video Editing     | 4921     | 5221       | +6%    |
+-------------------+----------+------------+--------+
| Writing           | 6864     | 8825       | +29%   |
+-------------------+----------+------------+--------+
| Photo Editing     | 7983     | 11593      | +45%   |
+-------------------+----------+------------+--------+
| Data Manipulation | 5814     | 6081       | +5%    |
+-------------------+----------+------------+--------+
PCMark Performance/mAh
Higher is better
+-----------+----------+------------+--------+
|           | Baseline | Util_guest | %delta |
+-----------+----------+------------+--------+
| Score/mAh | 79       | 88         | +11%   |
+-----------+----------+------------+--------+
Roblox
Higher is better
+-----+----------+------------+--------+
|     | Baseline | Util_guest | %delta |
+-----+----------+------------+--------+
| FPS | 18.25    | 28.66      | +57%   |
+-----+----------+------------+--------+
Roblox FPS/mAh
Higher is better
+-----+----------+------------+--------+
|     | Baseline | Util_guest | %delta |
+-----+----------+------------+--------+
| FPS | 0.15     | 0.19       | +26%   |
+-----+----------+------------+--------+
Use cases running a minimal system inside a VM on a Pixel 6:
============================================================
FIO
Higher is better
+----------------------+----------+------------+--------+
| Test Case (avg MB/s) | Baseline | Util_guest | %delta |
+----------------------+----------+------------+--------+
| Seq Write            | 9.27     | 12.6       | +36%   |
+----------------------+----------+------------+--------+
| Rand Write           | 9.34     | 11.9       | +27%   |
+----------------------+----------+------------+--------+
| Seq Read             | 106      | 124        | +17%   |
+----------------------+----------+------------+--------+
| Rand Read            | 33.6     | 35         | +4%    |
+----------------------+----------+------------+--------+
CPU-based ML Inference Benchmark
Lower is better
+-------------------------+----------+------------+--------+
| Test Case (ms)          | Baseline | Util_guest | %delta |
+-------------------------+----------+------------+--------+
| Cached Sample Inference | 2.57     | 1.75       | -32%   |
+-------------------------+----------+------------+--------+
| Small Sample Inference  | 6.8      | 5.57       | -18%   |
+-------------------------+----------+------------+--------+
| Large Sample Inference  | 31.2     | 26.58      | -15%   |
+-------------------------+----------+------------+--------+
These patches expect the host to:
- Affine vCPUs to specific clusters.
- Set vCPU capacity to match the host CPU they are running on.
To make this easy to try out, we have put up patches [4][5] that do this in
CrosVM. Once you pick up those patches, you can use the "--host-cpu-topology"
and "--virt-cpufreq" options to achieve the above.
The patch series can be broken down as follows:
Patch 1: Add util_guest as an additional PELT signal for host vCPU threads.
Patch 2: Hypercall for the guest to get the current pCPU's frequency.
Patch 3: Send vCPU run queue util to the host and apply it as util_guest.
Patch 4: Query the pCPU frequency table from the guest (we'll move this to DT
in the future).
Patches 5-6: DT bindings and a virtual cpufreq driver that uses the hypercalls
to send util to the host and implements frequency invariance in the guest (a
rough sketch of that last part follows below).
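To sketch the frequency-invariance part mentioned above (this is not the
driver code itself; it assumes the get_cur_cpufreq hypercall from patch 2 and
the stock arch_set_freq_scale() helper, and the real driver may hook things up
differently):

  #include <linux/arm-smccc.h>
  #include <linux/cpufreq.h>

  /* Sketch only: scale guest PELT by what the host pCPU is really running at. */
  static void kvm_cpufreq_update_freq_scale(struct cpufreq_policy *policy)
  {
          struct arm_smccc_res res;

          arm_smccc_1_1_invoke(ARM_SMCCC_VENDOR_HYP_KVM_GET_CUR_CPUFREQ_FUNC_ID,
                               &res);
          if ((long)res.a0 < 0)
                  return;         /* hypercall not available */

          /* res.a0 is the current pCPU frequency in kHz */
          arch_set_freq_scale(policy->related_cpus, res.a0,
                              policy->cpuinfo.max_freq);
  }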
Alternatives we have implemented and profiled:
==============================================
util_guest vs uclamp_min
========================
One suggestion at LPC was to use uclamp_min to apply the util info coming
from the guest. As we suspected, it doesn't perform as well because
uclamp_min is not additive, whereas the actual workload on the host CPU due
to the vCPU is additive to the existing workloads on the host. For example,
if a vCPU thread reporting 300 units of util lands on a host CPU that already
has 400 units of other work, the CPU really needs enough frequency to cover
roughly 700 units, while a uclamp_min of 300 only guarantees a floor of 300.
Uclamp_min also has the undesirable side effect that threads forked from the
vCPU thread inherit whatever uclamp_min value the vCPU thread had and then
get stuck with that uclamp_min value.
Below are some additional benchmark results comparing the uclamp_min
prototype (listed as Uclamp) against util_guest, using the same test
environment as before (including hypercalls).
As before, %delta is always relative to the baseline.
PCMark
Higher is better
+-------------------+----------+------------+--------+--------+--------+
| Test Case (score) | Baseline | Util_guest | %delta | Uclamp | %delta |
+-------------------+----------+------------+--------+--------+--------+
| Weighted Total    | 6136     | 7274       | +19%   | 6848   | +12%   |
+-------------------+----------+------------+--------+--------+--------+
| Web Browsing      | 5558     | 6273       | +13%   | 6050   | +9%    |
+-------------------+----------+------------+--------+--------+--------+
| Video Editing     | 4921     | 5221       | +6%    | 5091   | +3%    |
+-------------------+----------+------------+--------+--------+--------+
| Writing           | 6864     | 8825       | +29%   | 8523   | +24%   |
+-------------------+----------+------------+--------+--------+--------+
| Photo Editing     | 7983     | 11593      | +45%   | 9865   | +24%   |
+-------------------+----------+------------+--------+--------+--------+
| Data Manipulation | 5814     | 6081       | +5%    | 5836   | 0%     |
+-------------------+----------+------------+--------+--------+--------+
PCMark Performance/mAh
Higher is better
+-----------+----------+------------+--------+--------+--------+
|           | Baseline | Util_guest | %delta | Uclamp | %delta |
+-----------+----------+------------+--------+--------+--------+
| Score/mAh | 79       | 88         | +11%   | 83     | +7%    |
+-----------+----------+------------+--------+--------+--------+
Hypercalls vs MMIO:
===================
We realize that hypercalls are not the recommended choice for this, and we
have no attachment to any communication method as long as it gives good
results.
We started off with hypercalls to see what the best we could achieve would be
if we didn't have to context switch into host-side userspace.
To see the impact of switching from hypercalls to MMIO, we kept util_guest
and only switched from hypercall to MMIO. So in the results below:
- Hypercall = hypercall + util_guest
- MMIO = MMIO + util_guest
As before, %delta is always relative to the baseline.
PCMark
Higher is better
+-------------------+----------+------------+--------+-------+--------+
| Test Case (score) | Baseline | Hypercall  | %delta | MMIO  | %delta |
+-------------------+----------+------------+--------+-------+--------+
| Weighted Total    | 6136     | 7274       | +19%   | 6867  | +12%   |
+-------------------+----------+------------+--------+-------+--------+
| Web Browsing      | 5558     | 6273       | +13%   | 6035  | +9%    |
+-------------------+----------+------------+--------+-------+--------+
| Video Editing     | 4921     | 5221       | +6%    | 5167  | +5%    |
+-------------------+----------+------------+--------+-------+--------+
| Writing           | 6864     | 8825       | +29%   | 8529  | +24%   |
+-------------------+----------+------------+--------+-------+--------+
| Photo Editing     | 7983     | 11593      | +45%   | 10812 | +35%   |
+-------------------+----------+------------+--------+-------+--------+
| Data Manipulation | 5814     | 6081       | +5%    | 5327  | -8%    |
+-------------------+----------+------------+--------+-------+--------+
PCMark Performance/mAh
Higher is better
+-----------+----------+-----------+--------+------+--------+
|           | Baseline | Hypercall | %delta | MMIO | %delta |
+-----------+----------+-----------+--------+------+--------+
| Score/mAh | 79       | 88        | +11%   | 83   | +7%    |
+-----------+----------+-----------+--------+------+--------+
Roblox
Higher is better
+-----+----------+------------+--------+-------+--------+
|     | Baseline | Hypercall  | %delta | MMIO  | %delta |
+-----+----------+------------+--------+-------+--------+
| FPS | 18.25    | 28.66      | +57%   | 24.06 | +32%   |
+-----+----------+------------+--------+-------+--------+
Roblox Frames/mAh
Higher is better
+------------+----------+------------+--------+--------+--------+
|            | Baseline | Hypercall  | %delta | MMIO   | %delta |
+------------+----------+------------+--------+--------+--------+
| Frames/mAh | 91.25    | 114.64     | +26%   | 103.11 | +13%   |
+------------+----------+------------+--------+--------+--------+
Next steps:
===========
We are continuing to look into communication mechanisms other than
hypercalls that are just as/more efficient and avoid switching into the VMM
userspace. Any inputs in this regard are greatly appreciated.
Thanks,
David & Saravana
[1] - https://lpc.events/event/16/contributions/1195/
[2] - https://lpc.events/event/16/contributions/1195/attachments/970/1893/LPC%202022%20-%20VM%20DVFS.pdf
[3] - https://www.youtube.com/watch?v=hIg_5bg6opU
[4] - https://chromium-review.googlesource.com/c/crosvm/crosvm/+/4208668
[5] - https://chromium-review.googlesource.com/c/crosvm/crosvm/+/4288027
David Dai (6):
sched/fair: Add util_guest for tasks
kvm: arm64: Add support for get_cur_cpufreq service
kvm: arm64: Add support for util_hint service
kvm: arm64: Add support for get_freqtbl service
dt-bindings: cpufreq: add bindings for virtual kvm cpufreq
cpufreq: add kvm-cpufreq driver
.../bindings/cpufreq/cpufreq-virtual-kvm.yaml | 39 +++
Documentation/virt/kvm/api.rst | 28 ++
.../virt/kvm/arm/get_cur_cpufreq.rst | 21 ++
Documentation/virt/kvm/arm/get_freqtbl.rst | 23 ++
Documentation/virt/kvm/arm/index.rst | 3 +
Documentation/virt/kvm/arm/util_hint.rst | 22 ++
arch/arm64/include/uapi/asm/kvm.h | 3 +
arch/arm64/kvm/arm.c | 3 +
arch/arm64/kvm/hypercalls.c | 60 +++++
drivers/cpufreq/Kconfig | 13 +
drivers/cpufreq/Makefile | 1 +
drivers/cpufreq/kvm-cpufreq.c | 245 ++++++++++++++++++
include/linux/arm-smccc.h | 21 ++
include/linux/sched.h | 12 +
include/uapi/linux/kvm.h | 3 +
kernel/sched/core.c | 24 +-
kernel/sched/fair.c | 15 +-
tools/arch/arm64/include/uapi/asm/kvm.h | 3 +
18 files changed, 536 insertions(+), 3 deletions(-)
create mode 100644 Documentation/devicetree/bindings/cpufreq/cpufreq-virtual-kvm.yaml
create mode 100644 Documentation/virt/kvm/arm/get_cur_cpufreq.rst
create mode 100644 Documentation/virt/kvm/arm/get_freqtbl.rst
create mode 100644 Documentation/virt/kvm/arm/util_hint.rst
create mode 100644 drivers/cpufreq/kvm-cpufreq.c
--
2.40.0.348.gf938b09366-goog
* [RFC PATCH 2/6] kvm: arm64: Add support for get_cur_cpufreq service
2023-03-30 22:43 [RFC PATCH 0/6] Improve VM DVFS and task placement behavior David Dai
@ 2023-03-30 22:43 ` David Dai
2023-04-05 8:04 ` Quentin Perret
2023-03-30 22:43 ` [RFC PATCH 3/6] kvm: arm64: Add support for util_hint service David Dai
` (6 subsequent siblings)
7 siblings, 1 reply; 27+ messages in thread
From: David Dai @ 2023-03-30 22:43 UTC (permalink / raw)
To: Paolo Bonzini, Jonathan Corbet, Marc Zyngier, Oliver Upton,
James Morse, Suzuki K Poulose, Zenghui Yu, Catalin Marinas,
Will Deacon, Mark Rutland, Lorenzo Pieralisi, Sudeep Holla
Cc: David Dai, Saravana Kannan, kernel-team, kvm, linux-doc,
linux-kernel, linux-arm-kernel, kvmarm
This service allows guests to query the host for the frequency of the CPU
that the vCPU is currently running on.
Co-developed-by: Saravana Kannan <saravanak@google.com>
Signed-off-by: Saravana Kannan <saravanak@google.com>
Signed-off-by: David Dai <davidai@google.com>
---
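(Note for reviewers, not part of the commit: a rough sketch of how a guest is
expected to consume this, assuming the generic SMCCC 1.1 helpers; the real
guest-side user is the kvm-cpufreq driver later in the series.)

  if (kvm_arm_hyp_service_available(ARM_SMCCC_KVM_FUNC_GET_CUR_CPUFREQ)) {
          struct arm_smccc_res res;

          arm_smccc_1_1_invoke(ARM_SMCCC_VENDOR_HYP_KVM_GET_CUR_CPUFREQ_FUNC_ID,
                               &res);
          /* res.a0: current pCPU frequency in kHz, or negative on error */
  }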
Documentation/virt/kvm/api.rst | 8 +++++++
.../virt/kvm/arm/get_cur_cpufreq.rst | 21 +++++++++++++++++++
Documentation/virt/kvm/arm/index.rst | 1 +
arch/arm64/include/uapi/asm/kvm.h | 1 +
arch/arm64/kvm/arm.c | 1 +
arch/arm64/kvm/hypercalls.c | 18 ++++++++++++++++
include/linux/arm-smccc.h | 7 +++++++
include/uapi/linux/kvm.h | 1 +
tools/arch/arm64/include/uapi/asm/kvm.h | 1 +
9 files changed, 59 insertions(+)
create mode 100644 Documentation/virt/kvm/arm/get_cur_cpufreq.rst
diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 62de0768d6aa..b0ff0ad700bf 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -8380,6 +8380,14 @@ structure.
When getting the Modified Change Topology Report value, the attr->addr
must point to a byte where the value will be stored or retrieved from.
+8.40 KVM_CAP_GET_CUR_CPUFREQ
+------------------------
+
+:Architectures: arm64
+
+This capability indicates that KVM supports getting the
+frequency of the current CPU that the vCPU thread is running on.
+
9. Known KVM API problems
=========================
diff --git a/Documentation/virt/kvm/arm/get_cur_cpufreq.rst b/Documentation/virt/kvm/arm/get_cur_cpufreq.rst
new file mode 100644
index 000000000000..06e0ed5b3868
--- /dev/null
+++ b/Documentation/virt/kvm/arm/get_cur_cpufreq.rst
@@ -0,0 +1,21 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+get_cur_cpufreq support for arm/arm64
+=============================
+
+Get_cur_cpufreq support is used to get current frequency(in KHz) of the
+current CPU that the vCPU thread is running on.
+
+* ARM_SMCCC_VENDOR_HYP_KVM_GET_CUR_CPUFREQ_FUNC_ID: 0x86000040
+
+This hypercall uses the SMC32/HVC32 calling convention:
+
+ARM_SMCCC_VENDOR_HYP_KVM_GET_CUR_CPUFREQ_FUNC_ID
+ ============== ======== =====================================
+ Function ID: (uint32) 0x86000040
+ Return Values: (int32) NOT_SUPPORTED(-1) on error, or
+ (uint32) Frequency in KHz of current CPU that the
+ vCPU thread is running on.
+ Endianness: Must be the same endianness
+ as the host.
+ ============== ======== =====================================
diff --git a/Documentation/virt/kvm/arm/index.rst b/Documentation/virt/kvm/arm/index.rst
index e84848432158..47afc5c1f24a 100644
--- a/Documentation/virt/kvm/arm/index.rst
+++ b/Documentation/virt/kvm/arm/index.rst
@@ -11,3 +11,4 @@ ARM
hypercalls
pvtime
ptp_kvm
+ get_cur_cpufreq
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index f8129c624b07..ed8b63e91bdc 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -367,6 +367,7 @@ enum {
enum {
KVM_REG_ARM_VENDOR_HYP_BIT_FUNC_FEAT = 0,
KVM_REG_ARM_VENDOR_HYP_BIT_PTP = 1,
+ KVM_REG_ARM_VENDOR_HYP_BIT_GET_CUR_CPUFREQ = 2,
#ifdef __KERNEL__
KVM_REG_ARM_VENDOR_HYP_BMAP_BIT_COUNT,
#endif
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 3bd732eaf087..f960b136c611 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -220,6 +220,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
case KVM_CAP_VCPU_ATTRIBUTES:
case KVM_CAP_PTP_KVM:
case KVM_CAP_ARM_SYSTEM_SUSPEND:
+ case KVM_CAP_GET_CUR_CPUFREQ:
r = 1;
break;
case KVM_CAP_SET_GUEST_DEBUG2:
diff --git a/arch/arm64/kvm/hypercalls.c b/arch/arm64/kvm/hypercalls.c
index 5da884e11337..b3f4b90c024b 100644
--- a/arch/arm64/kvm/hypercalls.c
+++ b/arch/arm64/kvm/hypercalls.c
@@ -3,6 +3,9 @@
#include <linux/arm-smccc.h>
#include <linux/kvm_host.h>
+#include <linux/cpufreq.h>
+#include <linux/sched.h>
+#include <uapi/linux/sched/types.h>
#include <asm/kvm_emulate.h>
@@ -16,6 +19,15 @@
#define KVM_ARM_SMCCC_VENDOR_HYP_FEATURES \
GENMASK(KVM_REG_ARM_VENDOR_HYP_BMAP_BIT_COUNT - 1, 0)
+static void kvm_sched_get_cur_cpufreq(struct kvm_vcpu *vcpu, u64 *val)
+{
+ unsigned long ret_freq;
+
+ ret_freq = cpufreq_get(task_cpu(current));
+
+ val[0] = ret_freq;
+}
+
static void kvm_ptp_get_time(struct kvm_vcpu *vcpu, u64 *val)
{
struct system_time_snapshot systime_snapshot;
@@ -116,6 +128,9 @@ static bool kvm_hvc_call_allowed(struct kvm_vcpu *vcpu, u32 func_id)
case ARM_SMCCC_VENDOR_HYP_KVM_PTP_FUNC_ID:
return test_bit(KVM_REG_ARM_VENDOR_HYP_BIT_PTP,
&smccc_feat->vendor_hyp_bmap);
+ case ARM_SMCCC_VENDOR_HYP_KVM_GET_CUR_CPUFREQ_FUNC_ID:
+ return test_bit(KVM_REG_ARM_VENDOR_HYP_BIT_GET_CUR_CPUFREQ,
+ &smccc_feat->vendor_hyp_bmap);
default:
return kvm_hvc_call_default_allowed(func_id);
}
@@ -213,6 +228,9 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
case ARM_SMCCC_VENDOR_HYP_KVM_PTP_FUNC_ID:
kvm_ptp_get_time(vcpu, val);
break;
+ case ARM_SMCCC_VENDOR_HYP_KVM_GET_CUR_CPUFREQ_FUNC_ID:
+ kvm_sched_get_cur_cpufreq(vcpu, val);
+ break;
case ARM_SMCCC_TRNG_VERSION:
case ARM_SMCCC_TRNG_FEATURES:
case ARM_SMCCC_TRNG_GET_UUID:
diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h
index 220c8c60e021..e15f1bdcf3f1 100644
--- a/include/linux/arm-smccc.h
+++ b/include/linux/arm-smccc.h
@@ -112,6 +112,7 @@
/* KVM "vendor specific" services */
#define ARM_SMCCC_KVM_FUNC_FEATURES 0
#define ARM_SMCCC_KVM_FUNC_PTP 1
+#define ARM_SMCCC_KVM_FUNC_GET_CUR_CPUFREQ 64
#define ARM_SMCCC_KVM_FUNC_FEATURES_2 127
#define ARM_SMCCC_KVM_NUM_FUNCS 128
@@ -138,6 +139,12 @@
#define KVM_PTP_VIRT_COUNTER 0
#define KVM_PTP_PHYS_COUNTER 1
+#define ARM_SMCCC_VENDOR_HYP_KVM_GET_CUR_CPUFREQ_FUNC_ID \
+ ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
+ ARM_SMCCC_SMC_32, \
+ ARM_SMCCC_OWNER_VENDOR_HYP, \
+ ARM_SMCCC_KVM_FUNC_GET_CUR_CPUFREQ)
+
/* Paravirtualised time calls (defined by ARM DEN0057A) */
#define ARM_SMCCC_HV_PV_TIME_FEATURES \
ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index d77aef872a0a..0a1a260243bf 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1184,6 +1184,7 @@ struct kvm_ppc_resize_hpt {
#define KVM_CAP_S390_PROTECTED_ASYNC_DISABLE 224
#define KVM_CAP_DIRTY_LOG_RING_WITH_BITMAP 225
#define KVM_CAP_PMU_EVENT_MASKED_EVENTS 226
+#define KVM_CAP_GET_CUR_CPUFREQ 512
#ifdef KVM_CAP_IRQ_ROUTING
diff --git a/tools/arch/arm64/include/uapi/asm/kvm.h b/tools/arch/arm64/include/uapi/asm/kvm.h
index f8129c624b07..ed8b63e91bdc 100644
--- a/tools/arch/arm64/include/uapi/asm/kvm.h
+++ b/tools/arch/arm64/include/uapi/asm/kvm.h
@@ -367,6 +367,7 @@ enum {
enum {
KVM_REG_ARM_VENDOR_HYP_BIT_FUNC_FEAT = 0,
KVM_REG_ARM_VENDOR_HYP_BIT_PTP = 1,
+ KVM_REG_ARM_VENDOR_HYP_BIT_GET_CUR_CPUFREQ = 2,
#ifdef __KERNEL__
KVM_REG_ARM_VENDOR_HYP_BMAP_BIT_COUNT,
#endif
--
2.40.0.348.gf938b09366-goog
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [RFC PATCH 3/6] kvm: arm64: Add support for util_hint service
2023-03-30 22:43 [RFC PATCH 0/6] Improve VM DVFS and task placement behavior David Dai
2023-03-30 22:43 ` [RFC PATCH 2/6] kvm: arm64: Add support for get_cur_cpufreq service David Dai
@ 2023-03-30 22:43 ` David Dai
2023-03-30 22:43 ` [RFC PATCH 4/6] kvm: arm64: Add support for get_freqtbl service David Dai
` (5 subsequent siblings)
7 siblings, 0 replies; 27+ messages in thread
From: David Dai @ 2023-03-30 22:43 UTC (permalink / raw)
To: Paolo Bonzini, Jonathan Corbet, Marc Zyngier, Oliver Upton,
James Morse, Suzuki K Poulose, Zenghui Yu, Catalin Marinas,
Will Deacon, Mark Rutland, Lorenzo Pieralisi, Sudeep Holla
Cc: David Dai, Saravana Kannan, kernel-team, kvm, linux-doc,
linux-kernel, linux-arm-kernel, kvmarm
This service allows guests to send the utilization of the workloads on their
vCPUs to the host. Utilization is represented as an arbitrary value of 0-1024
where 1024 represents the highest performance point normalized for frequency
and architecture across all CPUs. This hint is used by the host for
scheduling vCPU threads and deciding CPU frequency.
Co-developed-by: Saravana Kannan <saravanak@google.com>
Signed-off-by: Saravana Kannan <saravanak@google.com>
Signed-off-by: David Dai <davidai@google.com>
---
Documentation/virt/kvm/api.rst | 12 ++++++++++++
Documentation/virt/kvm/arm/index.rst | 1 +
Documentation/virt/kvm/arm/util_hint.rst | 22 ++++++++++++++++++++++
arch/arm64/include/uapi/asm/kvm.h | 1 +
arch/arm64/kvm/arm.c | 1 +
arch/arm64/kvm/hypercalls.c | 20 ++++++++++++++++++++
include/linux/arm-smccc.h | 7 +++++++
include/uapi/linux/kvm.h | 1 +
tools/arch/arm64/include/uapi/asm/kvm.h | 1 +
9 files changed, 66 insertions(+)
create mode 100644 Documentation/virt/kvm/arm/util_hint.rst
diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index b0ff0ad700bf..38ce33564efc 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -8388,6 +8388,18 @@ must point to a byte where the value will be stored or retrieved from.
This capability indicates that KVM supports getting the
frequency of the current CPU that the vCPU thread is running on.
+8.41 KVM_CAP_UTIL_HINT
+----------------------
+
+:Architectures: arm64
+
+This capability indicates that the KVM supports taking utilization
+hints from the guest. Utilization is represented as a value from 0-1024
+where 1024 represents the highest performance point across all physical CPUs
+after normalizing for architecture. This is useful when guests are tracking
+workload on its vCPUs. Util hints allow the host to make more accurate
+frequency selections and task placement for vCPU threads.
+
9. Known KVM API problems
=========================
diff --git a/Documentation/virt/kvm/arm/index.rst b/Documentation/virt/kvm/arm/index.rst
index 47afc5c1f24a..f83877663813 100644
--- a/Documentation/virt/kvm/arm/index.rst
+++ b/Documentation/virt/kvm/arm/index.rst
@@ -12,3 +12,4 @@ ARM
pvtime
ptp_kvm
get_cur_cpufreq
+ util_hint
diff --git a/Documentation/virt/kvm/arm/util_hint.rst b/Documentation/virt/kvm/arm/util_hint.rst
new file mode 100644
index 000000000000..262d142d62d9
--- /dev/null
+++ b/Documentation/virt/kvm/arm/util_hint.rst
@@ -0,0 +1,22 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+Util_hint support for arm64
+============================
+
+Util_hint is used for sharing the utilization value from the guest
+to the host.
+
+* ARM_SMCCC_HYP_KVM_UTIL_HINT_FUNC_ID: 0x86000041
+
+This hypercall using the SMC32/HVC32 calling convention:
+
+ARM_SMCCC_HYP_KVM_UTIL_HINT_FUNC_ID
+ ============== ========= ============================
+ Function ID: (uint32) 0x86000041
+ Arguments: (uint32) util value(0-1024) where 1024 represents
+ the highest performance point normalized
+ across all CPUs
+ Return values: (int32) NOT_SUPPORTED(-1) on error.
+ Endianness: Must be the same endianness
+ as the host.
+ ============== ======== ============================
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index ed8b63e91bdc..61309ecb7241 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -368,6 +368,7 @@ enum {
KVM_REG_ARM_VENDOR_HYP_BIT_FUNC_FEAT = 0,
KVM_REG_ARM_VENDOR_HYP_BIT_PTP = 1,
KVM_REG_ARM_VENDOR_HYP_BIT_GET_CUR_CPUFREQ = 2,
+ KVM_REG_ARM_VENDOR_HYP_BIT_UTIL_HINT = 3,
#ifdef __KERNEL__
KVM_REG_ARM_VENDOR_HYP_BMAP_BIT_COUNT,
#endif
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index f960b136c611..bf3c4d4b9b67 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -221,6 +221,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
case KVM_CAP_PTP_KVM:
case KVM_CAP_ARM_SYSTEM_SUSPEND:
case KVM_CAP_GET_CUR_CPUFREQ:
+ case KVM_CAP_UTIL_HINT:
r = 1;
break;
case KVM_CAP_SET_GUEST_DEBUG2:
diff --git a/arch/arm64/kvm/hypercalls.c b/arch/arm64/kvm/hypercalls.c
index b3f4b90c024b..01dba07b5183 100644
--- a/arch/arm64/kvm/hypercalls.c
+++ b/arch/arm64/kvm/hypercalls.c
@@ -28,6 +28,20 @@ static void kvm_sched_get_cur_cpufreq(struct kvm_vcpu *vcpu, u64 *val)
val[0] = ret_freq;
}
+static void kvm_sched_set_util(struct kvm_vcpu *vcpu, u64 *val)
+{
+ struct sched_attr attr = {
+ .sched_flags = SCHED_FLAG_UTIL_GUEST,
+ };
+ int ret;
+
+ attr.sched_util_min = smccc_get_arg1(vcpu);
+
+ ret = sched_setattr_nocheck(current, &attr);
+
+ val[0] = (u64)ret;
+}
+
static void kvm_ptp_get_time(struct kvm_vcpu *vcpu, u64 *val)
{
struct system_time_snapshot systime_snapshot;
@@ -131,6 +145,9 @@ static bool kvm_hvc_call_allowed(struct kvm_vcpu *vcpu, u32 func_id)
case ARM_SMCCC_VENDOR_HYP_KVM_GET_CUR_CPUFREQ_FUNC_ID:
return test_bit(KVM_REG_ARM_VENDOR_HYP_BIT_GET_CUR_CPUFREQ,
&smccc_feat->vendor_hyp_bmap);
+ case ARM_SMCCC_VENDOR_HYP_KVM_UTIL_HINT_FUNC_ID:
+ return test_bit(KVM_REG_ARM_VENDOR_HYP_BIT_UTIL_HINT,
+ &smccc_feat->vendor_hyp_bmap);
default:
return kvm_hvc_call_default_allowed(func_id);
}
@@ -231,6 +248,9 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
case ARM_SMCCC_VENDOR_HYP_KVM_GET_CUR_CPUFREQ_FUNC_ID:
kvm_sched_get_cur_cpufreq(vcpu, val);
break;
+ case ARM_SMCCC_VENDOR_HYP_KVM_UTIL_HINT_FUNC_ID:
+ kvm_sched_set_util(vcpu, val);
+ break;
case ARM_SMCCC_TRNG_VERSION:
case ARM_SMCCC_TRNG_FEATURES:
case ARM_SMCCC_TRNG_GET_UUID:
diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h
index e15f1bdcf3f1..9f747e5025b6 100644
--- a/include/linux/arm-smccc.h
+++ b/include/linux/arm-smccc.h
@@ -113,6 +113,7 @@
#define ARM_SMCCC_KVM_FUNC_FEATURES 0
#define ARM_SMCCC_KVM_FUNC_PTP 1
#define ARM_SMCCC_KVM_FUNC_GET_CUR_CPUFREQ 64
+#define ARM_SMCCC_KVM_FUNC_UTIL_HINT 65
#define ARM_SMCCC_KVM_FUNC_FEATURES_2 127
#define ARM_SMCCC_KVM_NUM_FUNCS 128
@@ -145,6 +146,12 @@
ARM_SMCCC_OWNER_VENDOR_HYP, \
ARM_SMCCC_KVM_FUNC_GET_CUR_CPUFREQ)
+#define ARM_SMCCC_VENDOR_HYP_KVM_UTIL_HINT_FUNC_ID \
+ ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
+ ARM_SMCCC_SMC_32, \
+ ARM_SMCCC_OWNER_VENDOR_HYP, \
+ ARM_SMCCC_KVM_FUNC_UTIL_HINT)
+
/* Paravirtualised time calls (defined by ARM DEN0057A) */
#define ARM_SMCCC_HV_PV_TIME_FEATURES \
ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 0a1a260243bf..7f667ab344ae 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1185,6 +1185,7 @@ struct kvm_ppc_resize_hpt {
#define KVM_CAP_DIRTY_LOG_RING_WITH_BITMAP 225
#define KVM_CAP_PMU_EVENT_MASKED_EVENTS 226
#define KVM_CAP_GET_CUR_CPUFREQ 512
+#define KVM_CAP_UTIL_HINT 513
#ifdef KVM_CAP_IRQ_ROUTING
diff --git a/tools/arch/arm64/include/uapi/asm/kvm.h b/tools/arch/arm64/include/uapi/asm/kvm.h
index ed8b63e91bdc..61309ecb7241 100644
--- a/tools/arch/arm64/include/uapi/asm/kvm.h
+++ b/tools/arch/arm64/include/uapi/asm/kvm.h
@@ -368,6 +368,7 @@ enum {
KVM_REG_ARM_VENDOR_HYP_BIT_FUNC_FEAT = 0,
KVM_REG_ARM_VENDOR_HYP_BIT_PTP = 1,
KVM_REG_ARM_VENDOR_HYP_BIT_GET_CUR_CPUFREQ = 2,
+ KVM_REG_ARM_VENDOR_HYP_BIT_UTIL_HINT = 3,
#ifdef __KERNEL__
KVM_REG_ARM_VENDOR_HYP_BMAP_BIT_COUNT,
#endif
--
2.40.0.348.gf938b09366-goog
* [RFC PATCH 4/6] kvm: arm64: Add support for get_freqtbl service
2023-03-30 22:43 [RFC PATCH 0/6] Improve VM DVFS and task placement behavior David Dai
2023-03-30 22:43 ` [RFC PATCH 2/6] kvm: arm64: Add support for get_cur_cpufreq service David Dai
2023-03-30 22:43 ` [RFC PATCH 3/6] kvm: arm64: Add support for util_hint service David Dai
@ 2023-03-30 22:43 ` David Dai
2023-03-30 23:20 ` [RFC PATCH 0/6] Improve VM DVFS and task placement behavior Oliver Upton
` (4 subsequent siblings)
7 siblings, 0 replies; 27+ messages in thread
From: David Dai @ 2023-03-30 22:43 UTC (permalink / raw)
To: Paolo Bonzini, Jonathan Corbet, Marc Zyngier, Oliver Upton,
James Morse, Suzuki K Poulose, Zenghui Yu, Catalin Marinas,
Will Deacon, Mark Rutland, Lorenzo Pieralisi, Sudeep Holla
Cc: David Dai, Saravana Kannan, kernel-team, kvm, linux-doc,
linux-kernel, linux-arm-kernel, kvmarm
This service allows guests to query the host for the frequency table
of the CPU that the vCPU is currently running on.
Co-developed-by: Saravana Kannan <saravanak@google.com>
Signed-off-by: Saravana Kannan <saravanak@google.com>
Signed-off-by: David Dai <davidai@google.com>
---
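(Note for reviewers, not part of the commit: a rough sketch of the guest side.
Per the documentation below, a0 carries the SMCCC return code and a1 the table
entry; how the guest learns the number of entries is not shown here.)

  /* Sketch only: read one entry of the current pCPU's frequency table. */
  static int kvm_get_freqtbl_entry(u32 index, u32 *freq_khz)
  {
          struct arm_smccc_res res;

          arm_smccc_1_1_invoke(ARM_SMCCC_VENDOR_HYP_KVM_GET_CPUFREQ_TBL_FUNC_ID,
                               index, &res);
          if ((long)res.a0 != SMCCC_RET_SUCCESS)
                  return -EOPNOTSUPP;

          *freq_khz = res.a1;
          return 0;
  }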
Documentation/virt/kvm/api.rst | 8 ++++++++
Documentation/virt/kvm/arm/get_freqtbl.rst | 23 ++++++++++++++++++++++
Documentation/virt/kvm/arm/index.rst | 1 +
arch/arm64/include/uapi/asm/kvm.h | 1 +
arch/arm64/kvm/arm.c | 1 +
arch/arm64/kvm/hypercalls.c | 22 +++++++++++++++++++++
include/linux/arm-smccc.h | 7 +++++++
include/uapi/linux/kvm.h | 1 +
tools/arch/arm64/include/uapi/asm/kvm.h | 1 +
9 files changed, 65 insertions(+)
create mode 100644 Documentation/virt/kvm/arm/get_freqtbl.rst
diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 38ce33564efc..8f905456e2b4 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -8400,6 +8400,14 @@ after normalizing for architecture. This is useful when guests are tracking
workload on its vCPUs. Util hints allow the host to make more accurate
frequency selections and task placement for vCPU threads.
+8.42 KVM_CAP_GET_CPUFREQ_TBL
+---------------------------
+
+:Architectures: arm64
+
+This capability indicates that the KVM supports getting the
+frequency table of the current CPU that the vCPU thread is running on.
+
9. Known KVM API problems
=========================
diff --git a/Documentation/virt/kvm/arm/get_freqtbl.rst b/Documentation/virt/kvm/arm/get_freqtbl.rst
new file mode 100644
index 000000000000..f6832d7566e7
--- /dev/null
+++ b/Documentation/virt/kvm/arm/get_freqtbl.rst
@@ -0,0 +1,23 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+get_freqtbl support for arm/arm64
+=============================
+
+Allows guest to query the frequency(in KHz) table of the current CPU that
+the vCPU thread is running on.
+
+* ARM_SMCCC_VENDOR_HYP_KVM_GET_CPUFREQ_TBL_FUNC_ID: 0x86000042
+
+This hypercall uses the SMC32/HVC32 calling convention:
+
+ARM_SMCCC_VENDOR_HYP_KVM_GET_CPUFREQ_TBL_FUNC_ID
+ ============== ======== =====================================
+ Function ID: (uint32) 0x86000042
+ Arguments: (uint32) index of the current CPU's frequency table
+ Return Values: (int32) NOT_SUPPORTED(-1) on error, or
+ (uint32) Frequency table entry of requested index
+ in KHz
+ of current CPU(r1)
+ Endianness: Must be the same endianness
+ as the host.
+ ============== ======== =====================================
diff --git a/Documentation/virt/kvm/arm/index.rst b/Documentation/virt/kvm/arm/index.rst
index f83877663813..e2e56bb41491 100644
--- a/Documentation/virt/kvm/arm/index.rst
+++ b/Documentation/virt/kvm/arm/index.rst
@@ -13,3 +13,4 @@ ARM
ptp_kvm
get_cur_cpufreq
util_hint
+ get_freqtbl
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 61309ecb7241..ed6f593264bd 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -369,6 +369,7 @@ enum {
KVM_REG_ARM_VENDOR_HYP_BIT_PTP = 1,
KVM_REG_ARM_VENDOR_HYP_BIT_GET_CUR_CPUFREQ = 2,
KVM_REG_ARM_VENDOR_HYP_BIT_UTIL_HINT = 3,
+ KVM_REG_ARM_VENDOR_HYP_BIT_GET_CPUFREQ_TBL = 4,
#ifdef __KERNEL__
KVM_REG_ARM_VENDOR_HYP_BMAP_BIT_COUNT,
#endif
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index bf3c4d4b9b67..cd76128e4af4 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -222,6 +222,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
case KVM_CAP_ARM_SYSTEM_SUSPEND:
case KVM_CAP_GET_CUR_CPUFREQ:
case KVM_CAP_UTIL_HINT:
+ case KVM_CAP_GET_CPUFREQ_TBL:
r = 1;
break;
case KVM_CAP_SET_GUEST_DEBUG2:
diff --git a/arch/arm64/kvm/hypercalls.c b/arch/arm64/kvm/hypercalls.c
index 01dba07b5183..6f96579dda80 100644
--- a/arch/arm64/kvm/hypercalls.c
+++ b/arch/arm64/kvm/hypercalls.c
@@ -42,6 +42,22 @@ static void kvm_sched_set_util(struct kvm_vcpu *vcpu, u64 *val)
val[0] = (u64)ret;
}
+static void kvm_sched_get_cpufreq_table(struct kvm_vcpu *vcpu, u64 *val)
+{
+ struct cpufreq_policy *policy;
+ u32 idx = smccc_get_arg1(vcpu);
+
+ policy = cpufreq_cpu_get(task_cpu(current));
+
+ if (!policy)
+ return;
+
+ val[0] = SMCCC_RET_SUCCESS;
+ val[1] = policy->freq_table[idx].frequency;
+
+ cpufreq_cpu_put(policy);
+}
+
static void kvm_ptp_get_time(struct kvm_vcpu *vcpu, u64 *val)
{
struct system_time_snapshot systime_snapshot;
@@ -148,6 +164,9 @@ static bool kvm_hvc_call_allowed(struct kvm_vcpu *vcpu, u32 func_id)
case ARM_SMCCC_VENDOR_HYP_KVM_UTIL_HINT_FUNC_ID:
return test_bit(KVM_REG_ARM_VENDOR_HYP_BIT_UTIL_HINT,
&smccc_feat->vendor_hyp_bmap);
+ case ARM_SMCCC_VENDOR_HYP_KVM_GET_CPUFREQ_TBL_FUNC_ID:
+ return test_bit(KVM_REG_ARM_VENDOR_HYP_BIT_GET_CPUFREQ_TBL,
+ &smccc_feat->vendor_hyp_bmap);
default:
return kvm_hvc_call_default_allowed(func_id);
}
@@ -251,6 +270,9 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
case ARM_SMCCC_VENDOR_HYP_KVM_UTIL_HINT_FUNC_ID:
kvm_sched_set_util(vcpu, val);
break;
+ case ARM_SMCCC_VENDOR_HYP_KVM_GET_CPUFREQ_TBL_FUNC_ID:
+ kvm_sched_get_cpufreq_table(vcpu, val);
+ break;
case ARM_SMCCC_TRNG_VERSION:
case ARM_SMCCC_TRNG_FEATURES:
case ARM_SMCCC_TRNG_GET_UUID:
diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h
index 9f747e5025b6..19fefb73a9bd 100644
--- a/include/linux/arm-smccc.h
+++ b/include/linux/arm-smccc.h
@@ -114,6 +114,7 @@
#define ARM_SMCCC_KVM_FUNC_PTP 1
#define ARM_SMCCC_KVM_FUNC_GET_CUR_CPUFREQ 64
#define ARM_SMCCC_KVM_FUNC_UTIL_HINT 65
+#define ARM_SMCCC_KVM_FUNC_GET_CPUFREQ_TBL 66
#define ARM_SMCCC_KVM_FUNC_FEATURES_2 127
#define ARM_SMCCC_KVM_NUM_FUNCS 128
@@ -152,6 +153,12 @@
ARM_SMCCC_OWNER_VENDOR_HYP, \
ARM_SMCCC_KVM_FUNC_UTIL_HINT)
+#define ARM_SMCCC_VENDOR_HYP_KVM_GET_CPUFREQ_TBL_FUNC_ID \
+ ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
+ ARM_SMCCC_SMC_32, \
+ ARM_SMCCC_OWNER_VENDOR_HYP, \
+ ARM_SMCCC_KVM_FUNC_GET_CPUFREQ_TBL)
+
/* Paravirtualised time calls (defined by ARM DEN0057A) */
#define ARM_SMCCC_HV_PV_TIME_FEATURES \
ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 7f667ab344ae..90a7f37f046d 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1186,6 +1186,7 @@ struct kvm_ppc_resize_hpt {
#define KVM_CAP_PMU_EVENT_MASKED_EVENTS 226
#define KVM_CAP_GET_CUR_CPUFREQ 512
#define KVM_CAP_UTIL_HINT 513
+#define KVM_CAP_GET_CPUFREQ_TBL 514
#ifdef KVM_CAP_IRQ_ROUTING
diff --git a/tools/arch/arm64/include/uapi/asm/kvm.h b/tools/arch/arm64/include/uapi/asm/kvm.h
index 61309ecb7241..ebf9a3395c1b 100644
--- a/tools/arch/arm64/include/uapi/asm/kvm.h
+++ b/tools/arch/arm64/include/uapi/asm/kvm.h
@@ -369,6 +369,7 @@ enum {
KVM_REG_ARM_VENDOR_HYP_BIT_PTP = 1,
KVM_REG_ARM_VENDOR_HYP_BIT_GET_CUR_CPUFREQ = 2,
KVM_REG_ARM_VENDOR_HYP_BIT_UTIL_HINT = 3,
+ KVM_REG_ARM_VENDOR_HYP_BIT_CPUFREQ_TBL = 4,
#ifdef __KERNEL__
KVM_REG_ARM_VENDOR_HYP_BMAP_BIT_COUNT,
#endif
--
2.40.0.348.gf938b09366-goog
* Re: [RFC PATCH 0/6] Improve VM DVFS and task placement behavior
2023-03-30 22:43 [RFC PATCH 0/6] Improve VM DVFS and task placement behavior David Dai
` (2 preceding siblings ...)
2023-03-30 22:43 ` [RFC PATCH 4/6] kvm: arm64: Add support for get_freqtbl service David Dai
@ 2023-03-30 23:20 ` Oliver Upton
2023-03-30 23:36 ` Saravana Kannan
2023-03-31 0:49 ` Matthew Wilcox
` (3 subsequent siblings)
7 siblings, 1 reply; 27+ messages in thread
From: Oliver Upton @ 2023-03-30 23:20 UTC (permalink / raw)
To: David Dai
Cc: Rafael J. Wysocki, Viresh Kumar, Rob Herring, Krzysztof Kozlowski,
Paolo Bonzini, Jonathan Corbet, Marc Zyngier, James Morse,
Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon,
Mark Rutland, Lorenzo Pieralisi, Sudeep Holla, Ingo Molnar,
Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman,
Daniel Bristot de Oliveira, Valentin Schneider, kernel-team,
linux-pm, devicetree, linux-kernel, kvm, linux-doc,
linux-arm-kernel, kvmarm
On Thu, Mar 30, 2023 at 03:43:35PM -0700, David Dai wrote:
[...]
> David Dai (6):
> sched/fair: Add util_guest for tasks
> kvm: arm64: Add support for get_cur_cpufreq service
> kvm: arm64: Add support for util_hint service
> kvm: arm64: Add support for get_freqtbl service
> dt-bindings: cpufreq: add bindings for virtual kvm cpufreq
> cpufreq: add kvm-cpufreq driver
I only received patches 2-4 in my inbox (same goes for the mailing lists
AFAICT). Mind sending the rest? :)
--
Thanks,
Oliver
* Re: [RFC PATCH 0/6] Improve VM DVFS and task placement behavior
2023-03-30 23:20 ` [RFC PATCH 0/6] Improve VM DVFS and task placement behavior Oliver Upton
@ 2023-03-30 23:36 ` Saravana Kannan
2023-03-30 23:40 ` Oliver Upton
0 siblings, 1 reply; 27+ messages in thread
From: Saravana Kannan @ 2023-03-30 23:36 UTC (permalink / raw)
To: Oliver Upton
Cc: David Dai, Rafael J. Wysocki, Viresh Kumar, Rob Herring,
Krzysztof Kozlowski, Paolo Bonzini, Jonathan Corbet, Marc Zyngier,
James Morse, Suzuki K Poulose, Zenghui Yu, Catalin Marinas,
Will Deacon, Mark Rutland, Lorenzo Pieralisi, Sudeep Holla,
Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Daniel Bristot de Oliveira, Valentin Schneider, kernel-team,
linux-pm, devicetree, linux-kernel, kvm, linux-doc,
linux-arm-kernel, kvmarm
On Thu, Mar 30, 2023 at 4:20 PM Oliver Upton <oliver.upton@linux.dev> wrote:
>
> On Thu, Mar 30, 2023 at 03:43:35PM -0700, David Dai wrote:
>
> [...]
>
> > David Dai (6):
> > sched/fair: Add util_guest for tasks
> > kvm: arm64: Add support for get_cur_cpufreq service
> > kvm: arm64: Add support for util_hint service
> > kvm: arm64: Add support for get_freqtbl service
> > dt-bindings: cpufreq: add bindings for virtual kvm cpufreq
> > cpufreq: add kvm-cpufreq driver
>
> I only received patches 2-4 in my inbox (same goes for the mailing lists
> AFAICT). Mind sending the rest? :)
Oliver,
Sorry about that. Actually even I'm not cc'ed in the cover letter :)
Is it okay if we fix this when we send the next version? Mainly to
avoid some people responding to this vs others responding to a new
series (where the patches are the same).
We used a script for --to-cmd and --cc-cmd, but it looks like it needs
some more fixes.
Here is the full series to anyone who's wondering where the rest of
the patches are:
https://lore.kernel.org/lkml/20230330224348.1006691-1-davidai@google.com/T/#t
Thanks,
Saravana
* Re: [RFC PATCH 0/6] Improve VM DVFS and task placement behavior
2023-03-30 23:36 ` Saravana Kannan
@ 2023-03-30 23:40 ` Oliver Upton
2023-03-31 0:34 ` Saravana Kannan
0 siblings, 1 reply; 27+ messages in thread
From: Oliver Upton @ 2023-03-30 23:40 UTC (permalink / raw)
To: Saravana Kannan
Cc: David Dai, Rafael J. Wysocki, Viresh Kumar, Rob Herring,
Krzysztof Kozlowski, Paolo Bonzini, Jonathan Corbet, Marc Zyngier,
James Morse, Suzuki K Poulose, Zenghui Yu, Catalin Marinas,
Will Deacon, Mark Rutland, Lorenzo Pieralisi, Sudeep Holla,
Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Daniel Bristot de Oliveira, Valentin Schneider, kernel-team,
linux-pm, devicetree, linux-kernel, kvm, linux-doc,
linux-arm-kernel, kvmarm
On Thu, Mar 30, 2023 at 04:36:52PM -0700, Saravana Kannan wrote:
> On Thu, Mar 30, 2023 at 4:20 PM Oliver Upton <oliver.upton@linux.dev> wrote:
> >
> > On Thu, Mar 30, 2023 at 03:43:35PM -0700, David Dai wrote:
> >
> > [...]
> >
> > > David Dai (6):
> > > sched/fair: Add util_guest for tasks
> > > kvm: arm64: Add support for get_cur_cpufreq service
> > > kvm: arm64: Add support for util_hint service
> > > kvm: arm64: Add support for get_freqtbl service
> > > dt-bindings: cpufreq: add bindings for virtual kvm cpufreq
> > > cpufreq: add kvm-cpufreq driver
> >
> > I only received patches 2-4 in my inbox (same goes for the mailing lists
> > AFAICT). Mind sending the rest? :)
>
> Oliver,
>
> Sorry about that. Actually even I'm not cc'ed in the cover letter :)
>
> Is it okay if we fix this when we send the next version? Mainly to
> avoid some people responding to this vs other responding to a new
> series (where the patches are the same).
Fine by me, as long as the full series arrived somewhere.
> We used a script for --to-cmd and --cc-cmd but looks like it needs
> some more fixes.
>
> Here is the full series to anyone who's wondering where the rest of
> the patches are:
> https://lore.kernel.org/lkml/20230330224348.1006691-1-davidai@google.com/T/#t
Gah, a bit of noodling would've dug up the full series. Thanks for the
link.
--
Thanks,
Oliver
* Re: [RFC PATCH 0/6] Improve VM DVFS and task placement behavior
2023-03-30 23:40 ` Oliver Upton
@ 2023-03-31 0:34 ` Saravana Kannan
0 siblings, 0 replies; 27+ messages in thread
From: Saravana Kannan @ 2023-03-31 0:34 UTC (permalink / raw)
To: Oliver Upton
Cc: David Dai, Rafael J. Wysocki, Viresh Kumar, Rob Herring,
Krzysztof Kozlowski, Paolo Bonzini, Jonathan Corbet, Marc Zyngier,
James Morse, Suzuki K Poulose, Zenghui Yu, Catalin Marinas,
Will Deacon, Mark Rutland, Lorenzo Pieralisi, Sudeep Holla,
Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Daniel Bristot de Oliveira, Valentin Schneider, kernel-team,
linux-pm, devicetree, linux-kernel, kvm, linux-doc,
linux-arm-kernel, kvmarm
On Thu, Mar 30, 2023 at 4:40 PM Oliver Upton <oliver.upton@linux.dev> wrote:
>
> On Thu, Mar 30, 2023 at 04:36:52PM -0700, Saravana Kannan wrote:
> > On Thu, Mar 30, 2023 at 4:20 PM Oliver Upton <oliver.upton@linux.dev> wrote:
> > >
> > > On Thu, Mar 30, 2023 at 03:43:35PM -0700, David Dai wrote:
> > >
> > > [...]
> > >
> > > > David Dai (6):
> > > > sched/fair: Add util_guest for tasks
> > > > kvm: arm64: Add support for get_cur_cpufreq service
> > > > kvm: arm64: Add support for util_hint service
> > > > kvm: arm64: Add support for get_freqtbl service
> > > > dt-bindings: cpufreq: add bindings for virtual kvm cpufreq
> > > > cpufreq: add kvm-cpufreq driver
> > >
> > > I only received patches 2-4 in my inbox (same goes for the mailing lists
> > > AFAICT). Mind sending the rest? :)
> >
> > Oliver,
> >
> > Sorry about that. Actually even I'm not cc'ed in the cover letter :)
> >
> > Is it okay if we fix this when we send the next version? Mainly to
> > avoid some people responding to this vs other responding to a new
> > series (where the patches are the same).
>
> Fine by me, as long as the full series arrived somewhere.
>
> > We used a script for --to-cmd and --cc-cmd but looks like it needs
> > some more fixes.
> >
> > Here is the full series to anyone who's wondering where the rest of
> > the patches are:
> > https://lore.kernel.org/lkml/20230330224348.1006691-1-davidai@google.com/T/#t
>
> Gah, a bit of noodling would've dug up the full series. Thanks for the
> link.
Actually, we'll send out a new RFC v2 series with the To's and Cc's
fixed, along with some minor cover letter fixes. So everyone can ignore this
series and just wait for the RFC v2 series later today.
-Saravana
* Re: [RFC PATCH 0/6] Improve VM DVFS and task placement behavior
2023-03-30 22:43 [RFC PATCH 0/6] Improve VM DVFS and task placement behavior David Dai
` (3 preceding siblings ...)
2023-03-30 23:20 ` [RFC PATCH 0/6] Improve VM DVFS and task placement behavior Oliver Upton
@ 2023-03-31 0:49 ` Matthew Wilcox
2023-04-03 10:18 ` Mel Gorman
2023-04-04 19:43 ` Oliver Upton
` (2 subsequent siblings)
7 siblings, 1 reply; 27+ messages in thread
From: Matthew Wilcox @ 2023-03-31 0:49 UTC (permalink / raw)
To: David Dai
Cc: Rafael J. Wysocki, Viresh Kumar, Rob Herring, Krzysztof Kozlowski,
Paolo Bonzini, Jonathan Corbet, Marc Zyngier, Oliver Upton,
James Morse, Suzuki K Poulose, Zenghui Yu, Catalin Marinas,
Will Deacon, Mark Rutland, Lorenzo Pieralisi, Sudeep Holla,
Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Daniel Bristot de Oliveira, Valentin Schneider, kernel-team,
linux-pm, devicetree, linux-kernel, kvm, linux-doc,
linux-arm-kernel, kvmarm
On Thu, Mar 30, 2023 at 03:43:35PM -0700, David Dai wrote:
> Hi,
>
> This patch series is a continuation of the talk Saravana gave at LPC 2022
> titled "CPUfreq/sched and VM guest workload problems" [1][2][3]. The gist
> of the talk is that workloads running in a guest VM get terrible task
> placement and DVFS behavior when compared to running the same workload in
DVFS? Some new filesystem, perhaps?
> the host. Effectively, no EAS for threads inside VMs. This would make power
EAS?
Two unfamiliar and undefined acronyms in your opening paragraph.
You're not making me want to read the rest of your opus.
* Re: [RFC PATCH 0/6] Improve VM DVFS and task placement behavior
2023-03-31 0:49 ` Matthew Wilcox
@ 2023-04-03 10:18 ` Mel Gorman
0 siblings, 0 replies; 27+ messages in thread
From: Mel Gorman @ 2023-04-03 10:18 UTC (permalink / raw)
To: Matthew Wilcox
Cc: David Dai, Rafael J. Wysocki, Viresh Kumar, Rob Herring,
Krzysztof Kozlowski, Paolo Bonzini, Jonathan Corbet, Marc Zyngier,
Oliver Upton, James Morse, Suzuki K Poulose, Zenghui Yu,
Catalin Marinas, Will Deacon, Mark Rutland, Lorenzo Pieralisi,
Sudeep Holla, Ingo Molnar, Peter Zijlstra, Juri Lelli,
Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall,
Daniel Bristot de Oliveira, Valentin Schneider, kernel-team,
linux-pm, devicetree, linux-kernel, kvm, linux-doc,
linux-arm-kernel, kvmarm
On Fri, Mar 31, 2023 at 01:49:48AM +0100, Matthew Wilcox wrote:
> On Thu, Mar 30, 2023 at 03:43:35PM -0700, David Dai wrote:
> > Hi,
> >
> > This patch series is a continuation of the talk Saravana gave at LPC 2022
> > titled "CPUfreq/sched and VM guest workload problems" [1][2][3]. The gist
> > of the talk is that workloads running in a guest VM get terrible task
> > placement and DVFS behavior when compared to running the same workload in
>
> DVFS? Some new filesystem, perhaps?
>
Dynamic Voltage and Frequency Scaling (DVFS) -- it's a well known term in
cpufreq/cpuidle/schedutil land.
> > the host. Effectively, no EAS for threads inside VMs. This would make power
>
> EAS?
>
Energy Aware Scheduling (EAS) is mostly a kernel/sched thing that has
an impact on cpufreq and my recollection is that it was discussed at
conferences long before kernel/sched had any EAS awareness. I don't have
the full series in my inbox and didn't dig further but patch 1 at least is
providing additional information to schedutil which impacts CPU frequency
selection on systems to varying degrees. The full impact would depend on
what cpufreq driver is in use and the specific hardware so even if the
series benefits one set of hardware, it's not necessarily a guaranteed win.
> Two unfamiliar and undefined acronyms in your opening paragraph.
> You're not making me want to read the rest of your opus.
It depends on the audience and mm/ is not the audience. VM in the title
refers to Virtual Machine, not Virtual Memory although I confess I originally
read it as mm/ and wondered initially how mm/ affects DVFS to the extent it
triggered a "wtf happened in mm/ recently that I completely missed?". This
series is mostly of concern to scheduler, cpufreq or KVM depending on your
perspective. For example, on KVM, I'd immediately wonder if the hypercall
overhead exceeds any benefit from better task placement although the cover
letter suggests the answer is "no". However, it didn't comment (or I didn't read
carefully enough) on whether MMIO overhead or alternative communication
methods have constant cost across different hardware or, much more likely,
depend on the hardware that could potentially opt-in. Various cpufreq
hardware has very different costs when measuring or altering CPU frequency,
even within different generations of chips from the same vendor.
While the data also shows performance improvements, it doesn't indicate how
close to bare metal the improvement is. Even if it's 50% faster within a
VM, how much slower than bare metal is it? In terms of data presentation,
it might be better to assign bare metal a score of 1 at the best possible
score and show the VM performance as a relative ratio (1.00 for bare metal,
0.5 for VM with a vanilla kernel, 0.75 using improved task placement).
It would also be preferred to have x86-64 data as the hazards the series
details with impacts arm64 and x86-64 has the additional challenge that
cpufreq is often managed by the hardware so it should be demonstrated that
the series "does no harm" on x86-64 for recent generation Intel and AMD
chips if possible. The lack of that data doesn't kill the series as a large
improvement is still very interesting even if it's not perfect and possibly
specific to arm64. If this *was* my area or I happened to be paying close
attention to it at the time, I would likely favour using hypercalls only at
the start because it can be used universally and suggest adding alternative
communication methods later using the same metric "is an alternative method
of Guest<->Host communication worse, neutral or better at getting close to
bare metal performance?" I'd also push for the ratio tables as it's easier
to see at a glance how close to bare metal performance the series achieves.
Finally, I would look for x86-64 data just in case it causes harm due to
hypercall overhead on chips that manage frequency in firmware.
So while I haven't read the series and only patches 2+6 reached my inbox,
I understand the point in principle. The scheduler on wakeup paths for bare
metal also tries to favour recently used CPUs and avoid spurious CPU migration
even though it is only tangentially related to EAS. For example, a recently
used CPUs may still be polling (drivers/cpuidle/poll_state.c:poll_idle)
or at least not entered a deep C-state so the wakeup penalty is lower.
So whatever criticism the series deserves, it's not due to using obscure
terms that no one in kernel/sched/, drivers/cpuidle or drivers/cpufreq
would recognise.
--
Mel Gorman
SUSE Labs
* Re: [RFC PATCH 0/6] Improve VM DVFS and task placement behavior
2023-03-30 22:43 [RFC PATCH 0/6] Improve VM DVFS and task placement behavior David Dai
` (4 preceding siblings ...)
2023-03-31 0:49 ` Matthew Wilcox
@ 2023-04-04 19:43 ` Oliver Upton
2023-04-04 20:49 ` Marc Zyngier
2023-04-05 8:05 ` Peter Zijlstra
2023-04-27 7:46 ` Pavan Kondeti
7 siblings, 1 reply; 27+ messages in thread
From: Oliver Upton @ 2023-04-04 19:43 UTC (permalink / raw)
To: David Dai
Cc: Rafael J. Wysocki, Viresh Kumar, Rob Herring, Krzysztof Kozlowski,
Paolo Bonzini, Jonathan Corbet, Marc Zyngier, James Morse,
Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon,
Mark Rutland, Lorenzo Pieralisi, Sudeep Holla, Ingo Molnar,
Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman,
Daniel Bristot de Oliveira, Valentin Schneider, kernel-team,
linux-pm, devicetree, linux-kernel, kvm, linux-doc,
linux-arm-kernel, kvmarm
Folks,
On Thu, Mar 30, 2023 at 03:43:35PM -0700, David Dai wrote:
<snip>
> PCMark
> Higher is better
> +-------------------+----------+------------+--------+-------+--------+
> | Test Case (score) | Baseline | Hypercall | %delta | MMIO | %delta |
> +-------------------+----------+------------+--------+-------+--------+
> | Weighted Total | 6136 | 7274 | +19% | 6867 | +12% |
> +-------------------+----------+------------+--------+-------+--------+
> | Web Browsing | 5558 | 6273 | +13% | 6035 | +9% |
> +-------------------+----------+------------+--------+-------+--------+
> | Video Editing | 4921 | 5221 | +6% | 5167 | +5% |
> +-------------------+----------+------------+--------+-------+--------+
> | Writing | 6864 | 8825 | +29% | 8529 | +24% |
> +-------------------+----------+------------+--------+-------+--------+
> | Photo Editing | 7983 | 11593 | +45% | 10812 | +35% |
> +-------------------+----------+------------+--------+-------+--------+
> | Data Manipulation | 5814 | 6081 | +5% | 5327 | -8% |
> +-------------------+----------+------------+--------+-------+--------+
>
> PCMark Performance/mAh
> Higher is better
> +-----------+----------+-----------+--------+------+--------+
> | | Baseline | Hypercall | %delta | MMIO | %delta |
> +-----------+----------+-----------+--------+------+--------+
> | Score/mAh | 79 | 88 | +11% | 83 | +7% |
> +-----------+----------+-----------+--------+------+--------+
>
> Roblox
> Higher is better
> +-----+----------+------------+--------+-------+--------+
> | | Baseline | Hypercall | %delta | MMIO | %delta |
> +-----+----------+------------+--------+-------+--------+
> | FPS | 18.25 | 28.66 | +57% | 24.06 | +32% |
> +-----+----------+------------+--------+-------+--------+
>
> Roblox Frames/mAh
> Higher is better
> +------------+----------+------------+--------+--------+--------+
> | | Baseline | Hypercall | %delta | MMIO | %delta |
> +------------+----------+------------+--------+--------+--------+
> | Frames/mAh | 91.25 | 114.64 | +26% | 103.11 | +13% |
> +------------+----------+------------+--------+--------+--------+
</snip>
> Next steps:
> ===========
> We are continuing to look into communication mechanisms other than
> hypercalls that are just as/more efficient and avoid switching into the VMM
> userspace. Any inputs in this regard are greatly appreciated.
We're highly unlikely to entertain such an interface in KVM.
The entire feature is dependent on pinning vCPUs to physical cores, for which
userspace is in the driver's seat. That is a well established and documented
policy which can be seen in the way we handle heterogeneous systems and
vPMU.
Additionally, this bloats the KVM PV ABI with highly VMM-dependent interfaces
that I would not expect to benefit the typical user of KVM.
Based on the data above, it would appear that the userspace implementation is
in the same neighborhood as a KVM-based implementation, which only further
weakens the case for moving this into the kernel.
I certainly can appreciate the motivation for the series, but this feature
should be in userspace as some form of a virtual device.
--
Thanks,
Oliver
* Re: [RFC PATCH 0/6] Improve VM DVFS and task placement behavior
2023-04-04 19:43 ` Oliver Upton
@ 2023-04-04 20:49 ` Marc Zyngier
2023-04-05 7:48 ` Quentin Perret
2023-04-05 21:00 ` Saravana Kannan
0 siblings, 2 replies; 27+ messages in thread
From: Marc Zyngier @ 2023-04-04 20:49 UTC (permalink / raw)
To: David Dai, Oliver Upton
Cc: Rafael J. Wysocki, Viresh Kumar, Rob Herring, Krzysztof Kozlowski,
Paolo Bonzini, Jonathan Corbet, James Morse, Suzuki K Poulose,
Zenghui Yu, Catalin Marinas, Will Deacon, Mark Rutland,
Lorenzo Pieralisi, Sudeep Holla, Ingo Molnar, Peter Zijlstra,
Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
Ben Segall, Mel Gorman, Daniel Bristot de Oliveira,
Valentin Schneider, kernel-team, linux-pm, devicetree,
linux-kernel, kvm, linux-doc, linux-arm-kernel, kvmarm
On Tue, 04 Apr 2023 20:43:40 +0100,
Oliver Upton <oliver.upton@linux.dev> wrote:
>
> Folks,
>
> On Thu, Mar 30, 2023 at 03:43:35PM -0700, David Dai wrote:
>
> <snip>
>
> > PCMark
> > Higher is better
> > +-------------------+----------+------------+--------+-------+--------+
> > | Test Case (score) | Baseline | Hypercall | %delta | MMIO | %delta |
> > +-------------------+----------+------------+--------+-------+--------+
> > | Weighted Total | 6136 | 7274 | +19% | 6867 | +12% |
> > +-------------------+----------+------------+--------+-------+--------+
> > | Web Browsing | 5558 | 6273 | +13% | 6035 | +9% |
> > +-------------------+----------+------------+--------+-------+--------+
> > | Video Editing | 4921 | 5221 | +6% | 5167 | +5% |
> > +-------------------+----------+------------+--------+-------+--------+
> > | Writing | 6864 | 8825 | +29% | 8529 | +24% |
> > +-------------------+----------+------------+--------+-------+--------+
> > | Photo Editing | 7983 | 11593 | +45% | 10812 | +35% |
> > +-------------------+----------+------------+--------+-------+--------+
> > | Data Manipulation | 5814 | 6081 | +5% | 5327 | -8% |
> > +-------------------+----------+------------+--------+-------+--------+
> >
> > PCMark Performance/mAh
> > Higher is better
> > +-----------+----------+-----------+--------+------+--------+
> > | | Baseline | Hypercall | %delta | MMIO | %delta |
> > +-----------+----------+-----------+--------+------+--------+
> > | Score/mAh | 79 | 88 | +11% | 83 | +7% |
> > +-----------+----------+-----------+--------+------+--------+
> >
> > Roblox
> > Higher is better
> > +-----+----------+------------+--------+-------+--------+
> > | | Baseline | Hypercall | %delta | MMIO | %delta |
> > +-----+----------+------------+--------+-------+--------+
> > | FPS | 18.25 | 28.66 | +57% | 24.06 | +32% |
> > +-----+----------+------------+--------+-------+--------+
> >
> > Roblox Frames/mAh
> > Higher is better
> > +------------+----------+------------+--------+--------+--------+
> > | | Baseline | Hypercall | %delta | MMIO | %delta |
> > +------------+----------+------------+--------+--------+--------+
> > | Frames/mAh | 91.25 | 114.64 | +26% | 103.11 | +13% |
> > +------------+----------+------------+--------+--------+--------+
>
> </snip>
>
> > Next steps:
> > ===========
> > We are continuing to look into communication mechanisms other than
> > hypercalls that are just as/more efficient and avoid switching into the VMM
> > userspace. Any inputs in this regard are greatly appreciated.
>
> We're highly unlikely to entertain such an interface in KVM.
>
> The entire feature is dependent on pinning vCPUs to physical cores, for which
> userspace is in the driver's seat. That is a well established and documented
> policy which can be seen in the way we handle heterogeneous systems and
> vPMU.
>
> Additionally, this bloats the KVM PV ABI with highly VMM-dependent interfaces
> that I would not expect to benefit the typical user of KVM.
>
> Based on the data above, it would appear that the userspace implementation is
> in the same neighborhood as a KVM-based implementation, which only further
> weakens the case for moving this into the kernel.
>
> I certainly can appreciate the motivation for the series, but this feature
> should be in userspace as some form of a virtual device.
+1 on all of the above.
The one thing I'd like to understand is why the comment seems to imply
that there is a significant difference in overhead between a hypercall
and an MMIO. In my experience, both are pretty similar in cost for a
given handling location (both in userspace or both in the kernel). MMIO
handling is a tiny bit more expensive due to a guaranteed TLB miss
followed by a walk of the in-kernel device ranges, but that's all. It
should hardly register.
And if you really want some super-low latency, low overhead
signalling, maybe an exception is the wrong tool for the job. Shared
memory communication could be more appropriate.
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [RFC PATCH 0/6] Improve VM DVFS and task placement behavior
2023-04-04 20:49 ` Marc Zyngier
@ 2023-04-05 7:48 ` Quentin Perret
2023-04-05 8:33 ` Vincent Guittot
2023-04-05 21:07 ` Saravana Kannan
2023-04-05 21:00 ` Saravana Kannan
1 sibling, 2 replies; 27+ messages in thread
From: Quentin Perret @ 2023-04-05 7:48 UTC (permalink / raw)
To: Marc Zyngier
Cc: David Dai, Oliver Upton, Rafael J. Wysocki, Viresh Kumar,
Rob Herring, Krzysztof Kozlowski, Paolo Bonzini, Jonathan Corbet,
James Morse, Suzuki K Poulose, Zenghui Yu, Catalin Marinas,
Will Deacon, Mark Rutland, Lorenzo Pieralisi, Sudeep Holla,
Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Daniel Bristot de Oliveira, Valentin Schneider, kernel-team,
linux-pm, devicetree, linux-kernel, kvm, linux-doc,
linux-arm-kernel, kvmarm
On Tuesday 04 Apr 2023 at 21:49:10 (+0100), Marc Zyngier wrote:
> On Tue, 04 Apr 2023 20:43:40 +0100,
> Oliver Upton <oliver.upton@linux.dev> wrote:
> >
> > Folks,
> >
> > On Thu, Mar 30, 2023 at 03:43:35PM -0700, David Dai wrote:
> >
> > <snip>
> >
> > > PCMark
> > > Higher is better
> > > +-------------------+----------+------------+--------+-------+--------+
> > > | Test Case (score) | Baseline | Hypercall | %delta | MMIO | %delta |
> > > +-------------------+----------+------------+--------+-------+--------+
> > > | Weighted Total | 6136 | 7274 | +19% | 6867 | +12% |
> > > +-------------------+----------+------------+--------+-------+--------+
> > > | Web Browsing | 5558 | 6273 | +13% | 6035 | +9% |
> > > +-------------------+----------+------------+--------+-------+--------+
> > > | Video Editing | 4921 | 5221 | +6% | 5167 | +5% |
> > > +-------------------+----------+------------+--------+-------+--------+
> > > | Writing | 6864 | 8825 | +29% | 8529 | +24% |
> > > +-------------------+----------+------------+--------+-------+--------+
> > > | Photo Editing | 7983 | 11593 | +45% | 10812 | +35% |
> > > +-------------------+----------+------------+--------+-------+--------+
> > > | Data Manipulation | 5814 | 6081 | +5% | 5327 | -8% |
> > > +-------------------+----------+------------+--------+-------+--------+
> > >
> > > PCMark Performance/mAh
> > > Higher is better
> > > +-----------+----------+-----------+--------+------+--------+
> > > | | Baseline | Hypercall | %delta | MMIO | %delta |
> > > +-----------+----------+-----------+--------+------+--------+
> > > | Score/mAh | 79 | 88 | +11% | 83 | +7% |
> > > +-----------+----------+-----------+--------+------+--------+
> > >
> > > Roblox
> > > Higher is better
> > > +-----+----------+------------+--------+-------+--------+
> > > | | Baseline | Hypercall | %delta | MMIO | %delta |
> > > +-----+----------+------------+--------+-------+--------+
> > > | FPS | 18.25 | 28.66 | +57% | 24.06 | +32% |
> > > +-----+----------+------------+--------+-------+--------+
> > >
> > > Roblox Frames/mAh
> > > Higher is better
> > > +------------+----------+------------+--------+--------+--------+
> > > | | Baseline | Hypercall | %delta | MMIO | %delta |
> > > +------------+----------+------------+--------+--------+--------+
> > > | Frames/mAh | 91.25 | 114.64 | +26% | 103.11 | +13% |
> > > +------------+----------+------------+--------+--------+--------+
> >
> > </snip>
> >
> > > Next steps:
> > > ===========
> > > We are continuing to look into communication mechanisms other than
> > > hypercalls that are just as/more efficient and avoid switching into the VMM
> > > userspace. Any inputs in this regard are greatly appreciated.
> >
> > We're highly unlikely to entertain such an interface in KVM.
> >
> > The entire feature is dependent on pinning vCPUs to physical cores, for which
> > userspace is in the driver's seat. That is a well established and documented
> > policy which can be seen in the way we handle heterogeneous systems and
> > vPMU.
> >
> > Additionally, this bloats the KVM PV ABI with highly VMM-dependent interfaces
> > that I would not expect to benefit the typical user of KVM.
> >
> > Based on the data above, it would appear that the userspace implementation is
> > in the same neighborhood as a KVM-based implementation, which only further
> > weakens the case for moving this into the kernel.
> >
> > I certainly can appreciate the motivation for the series, but this feature
> > should be in userspace as some form of a virtual device.
>
> +1 on all of the above.
And I concur with all the above as well. Putting this in the kernel is
not an obvious fit at all as that requires a number of assumptions about
the VMM.
As Oliver pointed out, the guest topology, and how it maps to the host
topology (vcpu pinning etc) is very much a VMM policy decision and will
be particularly important to handle guest frequency requests correctly.
In addition to that, the VMM's software architecture may have an impact.
Crosvm for example does device emulation in separate processes for
security reasons, so it is likely that adjusting the scheduling
parameters ('util_guest', uclamp, or else) only for the vCPU thread that
issues frequency requests will be sub-optimal for performance; we may
want to adjust those parameters for all the tasks that are on the
critical path.
And at an even higher level, assuming in the kernel a certain mapping of
vCPU threads to host threads feels kinda wrong, this too is a host
userspace policy decision I believe. Not that anybody in their right
mind would want to do this, but I _think_ it would technically be
feasible to serialize the execution of multiple vCPUs on the same host
thread, at which point the util_guest thingy becomes entirely bogus. (I
obviously don't want to conflate this use-case, it's just an example
that shows the proposed abstraction in the series is not a perfect fit
for the KVM userspace delegation model.)
So +1 from me to move this as a virtual device of some kind. And if the
extra cost of exiting all the way back to userspace is prohibitive (is
it btw?), then we can try to work on that. Maybe something a la vhost
can be done to optimize, I'll have a think.
> The one thing I'd like to understand that the comment seems to imply
> that there is a significant difference in overhead between a hypercall
> and an MMIO. In my experience, both are pretty similar in cost for a
> handling location (both in userspace or both in the kernel). MMIO
> handling is a tiny bit more expensive due to a guaranteed TLB miss
> followed by a walk of the in-kernel device ranges, but that's all. It
> should hardly register.
>
> And if you really want some super-low latency, low overhead
> signalling, maybe an exception is the wrong tool for the job. Shared
> memory communication could be more appropriate.
I presume some kind of signalling mechanism will be necessary to
synchronously update host scheduling parameters in response to guest
frequency requests, but if the volume of data requires it then a shared
buffer + doorbell type of approach should do.
Thinking about it, using SCMI over virtio would implement exactly that.
Linux-as-a-guest already supports it IIRC, so possibly the problem
being addressed in this series could be 'simply' solved using an SCMI
backend in the VMM...
Thanks,
Quentin
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [RFC PATCH 2/6] kvm: arm64: Add support for get_cur_cpufreq service
2023-03-30 22:43 ` [RFC PATCH 2/6] kvm: arm64: Add support for get_cur_cpufreq service David Dai
@ 2023-04-05 8:04 ` Quentin Perret
0 siblings, 0 replies; 27+ messages in thread
From: Quentin Perret @ 2023-04-05 8:04 UTC (permalink / raw)
To: David Dai
Cc: Paolo Bonzini, Jonathan Corbet, Marc Zyngier, Oliver Upton,
James Morse, Suzuki K Poulose, Zenghui Yu, Catalin Marinas,
Will Deacon, Mark Rutland, Lorenzo Pieralisi, Sudeep Holla,
Saravana Kannan, kernel-team, kvm, linux-doc, linux-kernel,
linux-arm-kernel, kvmarm
On Thursday 30 Mar 2023 at 15:43:37 (-0700), David Dai wrote:
> This service allows guests to query the host for frequency of the CPU
> that the vCPU is currently running on.
I assume the intention here is to achieve scale invariance in the guest
to ensure its PELT signals represent how much work is actually being
done. If so, it's likely the usage of activity monitors will be superior
for this type of thing as that may allow us to drop the baked-in
assumption about vCPU pinning. IIRC, AMUs v2 (arm64-specific obv) have
extended support for virtualization, so I'd suggest looking into
supporting that first.
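For reference, the end goal of feeding a current frequency into the guest
is the usual frequency-invariance factor, roughly what
topology_set_freq_scale() computes on the host. A sketch (function name is
made up):

#define SCHED_CAPACITY_SHIFT	10

/*
 * PELT contributions get multiplied by cur/max so that "busy at half
 * speed" is not accounted the same as "busy at full speed".
 */
static unsigned long guest_freq_scale(unsigned long cur_freq_khz,
				      unsigned long max_freq_khz)
{
	return (cur_freq_khz << SCHED_CAPACITY_SHIFT) / max_freq_khz;
}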
And assuming we also want to support this on hardware that doesn't have
AMUs, or doesn't have the right virt extensions, then the only thing I can
think of is to have the VMM expose non-architectural AMUs to the guest,
maybe emulated using PMUs. If the guest uses Linux, it'll need to grow
support for non-architectural AMUs, which is its own can of worms though.
Thanks,
Quentin
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [RFC PATCH 0/6] Improve VM DVFS and task placement behavior
2023-03-30 22:43 [RFC PATCH 0/6] Improve VM DVFS and task placement behavior David Dai
` (5 preceding siblings ...)
2023-04-04 19:43 ` Oliver Upton
@ 2023-04-05 8:05 ` Peter Zijlstra
2023-04-05 21:08 ` Saravana Kannan
2023-04-27 7:46 ` Pavan Kondeti
7 siblings, 1 reply; 27+ messages in thread
From: Peter Zijlstra @ 2023-04-05 8:05 UTC (permalink / raw)
To: David Dai
Cc: Rafael J. Wysocki, Viresh Kumar, Rob Herring, Krzysztof Kozlowski,
Paolo Bonzini, Jonathan Corbet, Marc Zyngier, Oliver Upton,
James Morse, Suzuki K Poulose, Zenghui Yu, Catalin Marinas,
Will Deacon, Mark Rutland, Lorenzo Pieralisi, Sudeep Holla,
Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman,
Daniel Bristot de Oliveira, Valentin Schneider, kernel-team,
linux-pm, devicetree, linux-kernel, kvm, linux-doc,
linux-arm-kernel, kvmarm
On Thu, Mar 30, 2023 at 03:43:35PM -0700, David Dai wrote:
> Hi,
>
> This patch series is a continuation of the talk Saravana gave at LPC 2022
> titled "CPUfreq/sched and VM guest workload problems" [1][2][3]. The gist
> of the talk is that workloads running in a guest VM get terrible task
> placement and DVFS behavior when compared to running the same workload in
> the host. Effectively, no EAS for threads inside VMs. This would make power
> and performance terrible just by running the workload in a VM even if we
> assume there is zero virtualization overhead.
>
> We have been iterating over different options for communicating between
> guest and host, ways of applying the information coming from the
> guest/host, etc to figure out the best performance and power improvements
> we could get.
>
> The patch series in its current state is NOT meant for landing in the
> upstream kernel. We are sending this patch series to share the current
> progress and data we have so far. The patch series is meant to be easy to
> cherry-pick and test on various devices to see what performance and power
> benefits this might give for others.
>
> With this series, a workload running in a VM gets the same task placement
> and DVFS treatment as it would when running in the host.
>
> As expected, we see significant performance improvement and better
> performance/power ratio. If anyone else wants to try this out for your VM
> workloads and report findings, that'd be very much appreciated.
>
> The idea is to improve VM CPUfreq/sched behavior by:
> - Having guest kernel to do accurate load tracking by taking host CPU
> arch/type and frequency into account.
> - Sharing vCPU run queue utilization information with the host so that the
> host can do proper frequency scaling and task placement on the host side.
So, not having actually been sent many of the patches, I've no idea what
you've done... Please, eradicate this ridiculous idea of sending random
people a random subset of a patch series. Either send all of it or none;
this is a bloody nuisance.
Having said that, my biggest worry is that you're making scheduler
internals into an ABI. I would hate for this paravirt interface to tie
us down.
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [RFC PATCH 0/6] Improve VM DVFS and task placement behavior
2023-04-05 7:48 ` Quentin Perret
@ 2023-04-05 8:33 ` Vincent Guittot
2023-04-05 21:07 ` Saravana Kannan
1 sibling, 0 replies; 27+ messages in thread
From: Vincent Guittot @ 2023-04-05 8:33 UTC (permalink / raw)
To: Quentin Perret
Cc: Marc Zyngier, David Dai, Oliver Upton, Rafael J. Wysocki,
Viresh Kumar, Rob Herring, Krzysztof Kozlowski, Paolo Bonzini,
Jonathan Corbet, James Morse, Suzuki K Poulose, Zenghui Yu,
Catalin Marinas, Will Deacon, Mark Rutland, Lorenzo Pieralisi,
Sudeep Holla, Ingo Molnar, Peter Zijlstra, Juri Lelli,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Daniel Bristot de Oliveira, Valentin Schneider, kernel-team,
linux-pm, devicetree, linux-kernel, kvm, linux-doc,
linux-arm-kernel, kvmarm
On Wed, 5 Apr 2023 at 09:48, Quentin Perret <qperret@google.com> wrote:
>
> On Tuesday 04 Apr 2023 at 21:49:10 (+0100), Marc Zyngier wrote:
> > On Tue, 04 Apr 2023 20:43:40 +0100,
> > Oliver Upton <oliver.upton@linux.dev> wrote:
> > >
> > > Folks,
> > >
> > > On Thu, Mar 30, 2023 at 03:43:35PM -0700, David Dai wrote:
> > >
> > > <snip>
> > >
> > > > PCMark
> > > > Higher is better
> > > > +-------------------+----------+------------+--------+-------+--------+
> > > > | Test Case (score) | Baseline | Hypercall | %delta | MMIO | %delta |
> > > > +-------------------+----------+------------+--------+-------+--------+
> > > > | Weighted Total | 6136 | 7274 | +19% | 6867 | +12% |
> > > > +-------------------+----------+------------+--------+-------+--------+
> > > > | Web Browsing | 5558 | 6273 | +13% | 6035 | +9% |
> > > > +-------------------+----------+------------+--------+-------+--------+
> > > > | Video Editing | 4921 | 5221 | +6% | 5167 | +5% |
> > > > +-------------------+----------+------------+--------+-------+--------+
> > > > | Writing | 6864 | 8825 | +29% | 8529 | +24% |
> > > > +-------------------+----------+------------+--------+-------+--------+
> > > > | Photo Editing | 7983 | 11593 | +45% | 10812 | +35% |
> > > > +-------------------+----------+------------+--------+-------+--------+
> > > > | Data Manipulation | 5814 | 6081 | +5% | 5327 | -8% |
> > > > +-------------------+----------+------------+--------+-------+--------+
> > > >
> > > > PCMark Performance/mAh
> > > > Higher is better
> > > > +-----------+----------+-----------+--------+------+--------+
> > > > | | Baseline | Hypercall | %delta | MMIO | %delta |
> > > > +-----------+----------+-----------+--------+------+--------+
> > > > | Score/mAh | 79 | 88 | +11% | 83 | +7% |
> > > > +-----------+----------+-----------+--------+------+--------+
> > > >
> > > > Roblox
> > > > Higher is better
> > > > +-----+----------+------------+--------+-------+--------+
> > > > | | Baseline | Hypercall | %delta | MMIO | %delta |
> > > > +-----+----------+------------+--------+-------+--------+
> > > > | FPS | 18.25 | 28.66 | +57% | 24.06 | +32% |
> > > > +-----+----------+------------+--------+-------+--------+
> > > >
> > > > Roblox Frames/mAh
> > > > Higher is better
> > > > +------------+----------+------------+--------+--------+--------+
> > > > | | Baseline | Hypercall | %delta | MMIO | %delta |
> > > > +------------+----------+------------+--------+--------+--------+
> > > > | Frames/mAh | 91.25 | 114.64 | +26% | 103.11 | +13% |
> > > > +------------+----------+------------+--------+--------+--------+
> > >
> > > </snip>
> > >
> > > > Next steps:
> > > > ===========
> > > > We are continuing to look into communication mechanisms other than
> > > > hypercalls that are just as/more efficient and avoid switching into the VMM
> > > > userspace. Any inputs in this regard are greatly appreciated.
> > >
> > > We're highly unlikely to entertain such an interface in KVM.
> > >
> > > The entire feature is dependent on pinning vCPUs to physical cores, for which
> > > userspace is in the driver's seat. That is a well established and documented
> > > policy which can be seen in the way we handle heterogeneous systems and
> > > vPMU.
> > >
> > > Additionally, this bloats the KVM PV ABI with highly VMM-dependent interfaces
> > > that I would not expect to benefit the typical user of KVM.
> > >
> > > Based on the data above, it would appear that the userspace implementation is
> > > in the same neighborhood as a KVM-based implementation, which only further
> > > weakens the case for moving this into the kernel.
> > >
> > > I certainly can appreciate the motivation for the series, but this feature
> > > should be in userspace as some form of a virtual device.
> >
> > +1 on all of the above.
>
> And I concur with all the above as well. Putting this in the kernel is
> not an obvious fit at all as that requires a number of assumptions about
> the VMM.
>
> As Oliver pointed out, the guest topology, and how it maps to the host
> topology (vcpu pinning etc) is very much a VMM policy decision and will
> be particularly important to handle guest frequency requests correctly.
>
> In addition to that, the VMM's software architecture may have an impact.
> Crosvm for example does device emulation in separate processes for
> security reasons, so it is likely that adjusting the scheduling
> parameters ('util_guest', uclamp, or else) only for the vCPU thread that
> issues frequency requests will be sub-optimal for performance, we may
> want to adjust those parameters for all the tasks that are on the
> critical path.
>
> And at an even higher level, assuming in the kernel a certain mapping of
> vCPU threads to host threads feels kinda wrong, this too is a host
> userspace policy decision I believe. Not that anybody in their right
> mind would want to do this, but I _think_ it would technically be
> feasible to serialize the execution of multiple vCPUs on the same host
> thread, at which point the util_guest thingy becomes entirely bogus. (I
> obviously don't want to conflate this use-case, it's just an example
> that shows the proposed abstraction in the series is not a perfect fit
> for the KVM userspace delegation model.)
>
> So +1 from me to move this as a virtual device of some kind. And if the
> extra cost of exiting all the way back to userspace is prohibitive (is
> it btw?), then we can try to work on that. Maybe something a la vhost
> can be done to optimize, I'll have a think.
>
> > The one thing I'd like to understand that the comment seems to imply
> > that there is a significant difference in overhead between a hypercall
> > and an MMIO. In my experience, both are pretty similar in cost for a
> > handling location (both in userspace or both in the kernel). MMIO
> > handling is a tiny bit more expensive due to a guaranteed TLB miss
> > followed by a walk of the in-kernel device ranges, but that's all. It
> > should hardly register.
> >
> > And if you really want some super-low latency, low overhead
> > signalling, maybe an exception is the wrong tool for the job. Shared
> > memory communication could be more appropriate.
>
> I presume some kind of signalling mechanism will be necessary to
> synchronously update host scheduling parameters in response to guest
> frequency requests, but if the volume of data requires it then a shared
> buffer + doorbell type of approach should do.
>
> Thinking about it, using SCMI over virtio would implement exactly that.
> Linux-as-a-guest already supports it IIRC, so possibly the problem
> being addressed in this series could be 'simply' solved using an SCMI
> backend in the VMM...
This is what was suggested at LPC:
- use virtio-scmi and an SCMI performance domain in the guest for the
  cpufreq driver
- use a vhost-user SCMI backend in userspace
- have this vhost-user backend update the uclamp min of the vCPU
  thread, or use another method if this one is not good enough (sketch
  below)
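For that last step, the vhost-user backend can use sched_setattr() with
the util clamp flags on the vCPU thread. A minimal userspace sketch,
assuming the backend already knows the vCPU thread's TID (helper name is
made up, flag values copied from include/uapi/linux/sched.h):

#include <stdint.h>
#include <string.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>

#define SCHED_FLAG_KEEP_ALL		0x18	/* keep policy and params */
#define SCHED_FLAG_UTIL_CLAMP_MIN	0x20

struct sched_attr {
	uint32_t size;
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;
	uint32_t sched_priority;
	uint64_t sched_runtime;
	uint64_t sched_deadline;
	uint64_t sched_period;
	uint32_t sched_util_min;
	uint32_t sched_util_max;
};

/* Raise uclamp.min of one vCPU thread to the guest-requested level. */
static int set_vcpu_uclamp_min(pid_t vcpu_tid, uint32_t util /* 0-1024 */)
{
	struct sched_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.sched_flags = SCHED_FLAG_KEEP_ALL | SCHED_FLAG_UTIL_CLAMP_MIN;
	attr.sched_util_min = util;

	return syscall(SYS_sched_setattr, vcpu_tid, &attr, 0);
}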
>
> Thanks,
> Quentin
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [RFC PATCH 0/6] Improve VM DVFS and task placement behavior
2023-04-04 20:49 ` Marc Zyngier
2023-04-05 7:48 ` Quentin Perret
@ 2023-04-05 21:00 ` Saravana Kannan
2023-04-06 8:42 ` Marc Zyngier
1 sibling, 1 reply; 27+ messages in thread
From: Saravana Kannan @ 2023-04-05 21:00 UTC (permalink / raw)
To: Marc Zyngier
Cc: David Dai, Oliver Upton, Rafael J. Wysocki, Viresh Kumar,
Rob Herring, Krzysztof Kozlowski, Paolo Bonzini, Jonathan Corbet,
James Morse, Suzuki K Poulose, Zenghui Yu, Catalin Marinas,
Will Deacon, Mark Rutland, Lorenzo Pieralisi, Sudeep Holla,
Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Daniel Bristot de Oliveira, Valentin Schneider, kernel-team,
linux-pm, devicetree, linux-kernel, kvm, linux-doc,
linux-arm-kernel, kvmarm
On Tue, Apr 4, 2023 at 1:49 PM Marc Zyngier <maz@kernel.org> wrote:
>
> On Tue, 04 Apr 2023 20:43:40 +0100,
> Oliver Upton <oliver.upton@linux.dev> wrote:
> >
> > Folks,
> >
> > On Thu, Mar 30, 2023 at 03:43:35PM -0700, David Dai wrote:
> >
> > <snip>
> >
> > > PCMark
> > > Higher is better
> > > +-------------------+----------+------------+--------+-------+--------+
> > > | Test Case (score) | Baseline | Hypercall | %delta | MMIO | %delta |
> > > +-------------------+----------+------------+--------+-------+--------+
> > > | Weighted Total | 6136 | 7274 | +19% | 6867 | +12% |
> > > +-------------------+----------+------------+--------+-------+--------+
> > > | Web Browsing | 5558 | 6273 | +13% | 6035 | +9% |
> > > +-------------------+----------+------------+--------+-------+--------+
> > > | Video Editing | 4921 | 5221 | +6% | 5167 | +5% |
> > > +-------------------+----------+------------+--------+-------+--------+
> > > | Writing | 6864 | 8825 | +29% | 8529 | +24% |
> > > +-------------------+----------+------------+--------+-------+--------+
> > > | Photo Editing | 7983 | 11593 | +45% | 10812 | +35% |
> > > +-------------------+----------+------------+--------+-------+--------+
> > > | Data Manipulation | 5814 | 6081 | +5% | 5327 | -8% |
> > > +-------------------+----------+------------+--------+-------+--------+
> > >
> > > PCMark Performance/mAh
> > > Higher is better
> > > +-----------+----------+-----------+--------+------+--------+
> > > | | Baseline | Hypercall | %delta | MMIO | %delta |
> > > +-----------+----------+-----------+--------+------+--------+
> > > | Score/mAh | 79 | 88 | +11% | 83 | +7% |
> > > +-----------+----------+-----------+--------+------+--------+
> > >
> > > Roblox
> > > Higher is better
> > > +-----+----------+------------+--------+-------+--------+
> > > | | Baseline | Hypercall | %delta | MMIO | %delta |
> > > +-----+----------+------------+--------+-------+--------+
> > > | FPS | 18.25 | 28.66 | +57% | 24.06 | +32% |
> > > +-----+----------+------------+--------+-------+--------+
> > >
> > > Roblox Frames/mAh
> > > Higher is better
> > > +------------+----------+------------+--------+--------+--------+
> > > | | Baseline | Hypercall | %delta | MMIO | %delta |
> > > +------------+----------+------------+--------+--------+--------+
> > > | Frames/mAh | 91.25 | 114.64 | +26% | 103.11 | +13% |
> > > +------------+----------+------------+--------+--------+--------+
> >
> > </snip>
> >
> > > Next steps:
> > > ===========
> > > We are continuing to look into communication mechanisms other than
> > > hypercalls that are just as/more efficient and avoid switching into the VMM
> > > userspace. Any inputs in this regard are greatly appreciated.
Hi Oliver and Marc,
Replying to both of you in this one email.
> >
> > We're highly unlikely to entertain such an interface in KVM.
> >
> > The entire feature is dependent on pinning vCPUs to physical cores, for which
> > userspace is in the driver's seat. That is a well established and documented
> > policy which can be seen in the way we handle heterogeneous systems and
> > vPMU.
> >
> > Additionally, this bloats the KVM PV ABI with highly VMM-dependent interfaces
> > that I would not expect to benefit the typical user of KVM.
> >
> > Based on the data above, it would appear that the userspace implementation is
> > in the same neighborhood as a KVM-based implementation, which only further
> > weakens the case for moving this into the kernel.
Oliver,
Sorry if the tables/data aren't presented in an intuitive way, but
MMIO vs hypercall is definitely not in the same neighborhood. The
hypercall method often gives close to 2x the improvement that the MMIO
method gives. For example:
- Roblox FPS: MMIO improves it by 32% vs hypercall improves it by 57%.
- Frames/mAh: MMIO improves it by 13% vs hypercall improves it by 26%.
- PC Mark Data manipulation: MMIO makes it worse by 8% vs hypercall
improves it by 5%
Hypercall does better for other cases too, just not as good. For example,
- PC Mark Photo editing: Going from MMIO to hypercall gives another 10% improvement over baseline (+35% vs +45%).
These are all pretty non-trivial, at least in the mobile world. Heck,
whole teams would spend months for 2% improvement in battery :)
> >
> > I certainly can appreciate the motivation for the series, but this feature
> > should be in userspace as some form of a virtual device.
>
> +1 on all of the above.
Marc and Oliver,
We are not tied to hypercalls. We want to do the right thing here, but
MMIO going all the way to userspace definitely doesn't cut it as is.
This is where we need some guidance. See more below.
> The one thing I'd like to understand that the comment seems to imply
> that there is a significant difference in overhead between a hypercall
> and an MMIO. In my experience, both are pretty similar in cost for a
> handling location (both in userspace or both in the kernel).
I think the main difference really is that in our hypercall vs MMIO
comparison the hypercall is handled in the kernel whereas the MMIO goes
all the way to userspace. I agree with you that the difference probably
won't be significant if both of them go to the same "depth" in the
privilege levels.
> MMIO
> handling is a tiny bit more expensive due to a guaranteed TLB miss
> followed by a walk of the in-kernel device ranges, but that's all. It
> should hardly register.
>
> And if you really want some super-low latency, low overhead
> signalling, maybe an exception is the wrong tool for the job. Shared
> memory communication could be more appropriate.
Yeah, that's one of our next steps. Ideally, we want to use shared
memory for the host to guest information flow. It's a 32-bit value
representing the current frequency that the host can update whenever
the host CPU frequency changes and the guest can read whenever it
needs it.
For guest to host information flow, we'll need a kick from guest to
host because we need to take action on the host side when threads
migrate between vCPUs and cause a significant change in vCPU util.
Again, it can be just shared memory plus some kind of kick. This is
what we are currently trying to figure out how to do.
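Roughly, the shared region we have in mind would look something like this
(purely illustrative layout, not code from this series):

#include <stdint.h>

/* One region shared between guest kernel, host kernel and the VMM. */
struct vcpufreq_shared_region {
	/*
	 * Host -> guest: frequency (kHz) of the physical CPU this vCPU is
	 * running on. The host updates it on every DVFS change; the guest
	 * reads it whenever it needs to scale its load tracking.
	 */
	uint32_t cur_freq_khz;

	/*
	 * Guest -> host: latest vCPU run queue utilization (0-1024).
	 * Written by the guest; the host/VMM act on it after a doorbell
	 * kick when the value changes significantly.
	 */
	uint32_t vcpu_util;
};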
If there are APIs to do this, can you point us to those please? We'd
also want the shared memory to be accessible by the VMM (so, shared
between guest kernel, host kernel and VMM).
Are the above next steps sane? Or is that a no-go? The main thing we
want to cut out is the need to switch to userspace for
every single interaction because, as is, it leaves a lot on the table.
Also, thanks for all the feedback. Glad to receive it.
-Saravana
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [RFC PATCH 0/6] Improve VM DVFS and task placement behavior
2023-04-05 7:48 ` Quentin Perret
2023-04-05 8:33 ` Vincent Guittot
@ 2023-04-05 21:07 ` Saravana Kannan
2023-04-06 12:52 ` Quentin Perret
1 sibling, 1 reply; 27+ messages in thread
From: Saravana Kannan @ 2023-04-05 21:07 UTC (permalink / raw)
To: Quentin Perret
Cc: Marc Zyngier, David Dai, Oliver Upton, Rafael J. Wysocki,
Viresh Kumar, Rob Herring, Krzysztof Kozlowski, Paolo Bonzini,
Jonathan Corbet, James Morse, Suzuki K Poulose, Zenghui Yu,
Catalin Marinas, Will Deacon, Mark Rutland, Lorenzo Pieralisi,
Sudeep Holla, Ingo Molnar, Peter Zijlstra, Juri Lelli,
Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall,
Mel Gorman, Daniel Bristot de Oliveira, Valentin Schneider,
kernel-team, linux-pm, devicetree, linux-kernel, kvm, linux-doc,
linux-arm-kernel, kvmarm
On Wed, Apr 5, 2023 at 12:48 AM 'Quentin Perret' via kernel-team
<kernel-team@android.com> wrote:
>
> On Tuesday 04 Apr 2023 at 21:49:10 (+0100), Marc Zyngier wrote:
> > On Tue, 04 Apr 2023 20:43:40 +0100,
> > Oliver Upton <oliver.upton@linux.dev> wrote:
> > >
> > > Folks,
> > >
> > > On Thu, Mar 30, 2023 at 03:43:35PM -0700, David Dai wrote:
> > >
> > > <snip>
> > >
> > > > PCMark
> > > > Higher is better
> > > > +-------------------+----------+------------+--------+-------+--------+
> > > > | Test Case (score) | Baseline | Hypercall | %delta | MMIO | %delta |
> > > > +-------------------+----------+------------+--------+-------+--------+
> > > > | Weighted Total | 6136 | 7274 | +19% | 6867 | +12% |
> > > > +-------------------+----------+------------+--------+-------+--------+
> > > > | Web Browsing | 5558 | 6273 | +13% | 6035 | +9% |
> > > > +-------------------+----------+------------+--------+-------+--------+
> > > > | Video Editing | 4921 | 5221 | +6% | 5167 | +5% |
> > > > +-------------------+----------+------------+--------+-------+--------+
> > > > | Writing | 6864 | 8825 | +29% | 8529 | +24% |
> > > > +-------------------+----------+------------+--------+-------+--------+
> > > > | Photo Editing | 7983 | 11593 | +45% | 10812 | +35% |
> > > > +-------------------+----------+------------+--------+-------+--------+
> > > > | Data Manipulation | 5814 | 6081 | +5% | 5327 | -8% |
> > > > +-------------------+----------+------------+--------+-------+--------+
> > > >
> > > > PCMark Performance/mAh
> > > > Higher is better
> > > > +-----------+----------+-----------+--------+------+--------+
> > > > | | Baseline | Hypercall | %delta | MMIO | %delta |
> > > > +-----------+----------+-----------+--------+------+--------+
> > > > | Score/mAh | 79 | 88 | +11% | 83 | +7% |
> > > > +-----------+----------+-----------+--------+------+--------+
> > > >
> > > > Roblox
> > > > Higher is better
> > > > +-----+----------+------------+--------+-------+--------+
> > > > | | Baseline | Hypercall | %delta | MMIO | %delta |
> > > > +-----+----------+------------+--------+-------+--------+
> > > > | FPS | 18.25 | 28.66 | +57% | 24.06 | +32% |
> > > > +-----+----------+------------+--------+-------+--------+
> > > >
> > > > Roblox Frames/mAh
> > > > Higher is better
> > > > +------------+----------+------------+--------+--------+--------+
> > > > | | Baseline | Hypercall | %delta | MMIO | %delta |
> > > > +------------+----------+------------+--------+--------+--------+
> > > > | Frames/mAh | 91.25 | 114.64 | +26% | 103.11 | +13% |
> > > > +------------+----------+------------+--------+--------+--------+
> > >
> > > </snip>
> > >
> > > > Next steps:
> > > > ===========
> > > > We are continuing to look into communication mechanisms other than
> > > > hypercalls that are just as/more efficient and avoid switching into the VMM
> > > > userspace. Any inputs in this regard are greatly appreciated.
> > >
> > > We're highly unlikely to entertain such an interface in KVM.
> > >
> > > The entire feature is dependent on pinning vCPUs to physical cores, for which
> > > userspace is in the driver's seat. That is a well established and documented
> > > policy which can be seen in the way we handle heterogeneous systems and
> > > vPMU.
> > >
> > > Additionally, this bloats the KVM PV ABI with highly VMM-dependent interfaces
> > > that I would not expect to benefit the typical user of KVM.
> > >
> > > Based on the data above, it would appear that the userspace implementation is
> > > in the same neighborhood as a KVM-based implementation, which only further
> > > weakens the case for moving this into the kernel.
> > >
> > > I certainly can appreciate the motivation for the series, but this feature
> > > should be in userspace as some form of a virtual device.
> >
> > +1 on all of the above.
>
> And I concur with all the above as well. Putting this in the kernel is
> not an obvious fit at all as that requires a number of assumptions about
> the VMM.
>
> As Oliver pointed out, the guest topology, and how it maps to the host
> topology (vcpu pinning etc) is very much a VMM policy decision and will
> be particularly important to handle guest frequency requests correctly.
>
> In addition to that, the VMM's software architecture may have an impact.
> Crosvm for example does device emulation in separate processes for
> security reasons, so it is likely that adjusting the scheduling
> parameters ('util_guest', uclamp, or else) only for the vCPU thread that
> issues frequency requests will be sub-optimal for performance, we may
> want to adjust those parameters for all the tasks that are on the
> critical path.
>
> And at an even higher level, assuming in the kernel a certain mapping of
> vCPU threads to host threads feels kinda wrong, this too is a host
> userspace policy decision I believe. Not that anybody in their right
> mind would want to do this, but I _think_ it would technically be
> feasible to serialize the execution of multiple vCPUs on the same host
> thread, at which point the util_guest thingy becomes entirely bogus. (I
> obviously don't want to conflate this use-case, it's just an example
> that shows the proposed abstraction in the series is not a perfect fit
> for the KVM userspace delegation model.)
See my reply to Oliver and Marc. To me it looks like we are converging
towards having shared memory between guest, host kernel and VMM and
that should address all our concerns.
The guest will see an MMIO device; writing to it will trigger the host
kernel to do the basic "set util_guest/uclamp for the vCPU thread that
corresponds to the vCPU", and then the VMM can do more on top as/if
needed (because it has access to the shared memory too). Does that
make sense?
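Something like this on the guest side, where the register layout is made
up just for illustration:

#include <linux/io.h>
#include <linux/types.h>

#define VCPUFREQ_REG_PERF_REQ	0x0	/* W: vCPU perf/util request, 0-1024 */
#define VCPUFREQ_REG_CUR_FREQ	0x4	/* R: current host CPU freq in kHz   */

/* Called from the guest's cpufreq/sched glue when vCPU util changes. */
static void vcpufreq_set_perf(void __iomem *base, u32 perf)
{
	/*
	 * Traps to the host, which does the basic util_guest/uclamp update
	 * for this vCPU thread; the VMM can layer extra policy on top via
	 * the shared memory.
	 */
	writel_relaxed(perf, base + VCPUFREQ_REG_PERF_REQ);
}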
Even in the extreme example, the stuff the kernel would do would still
be helpful, but not sufficient. You can aggregate the
util_guest/uclamp and do whatever from the VMM.
Technically in the extreme example, you don't need any of this. The
normal util tracking of the vCPU thread on the host side would be
sufficient.
Actually, any time we have only 1 vCPU host thread per VM, we shouldn't
be using anything in this patch series and shouldn't instantiate the
guest device at all.
> So +1 from me to move this as a virtual device of some kind. And if the
> extra cost of exiting all the way back to userspace is prohibitive (is
> it btw?),
I think the "13% increase in battery consumption for games" makes it
pretty clear that going to userspace is prohibitive. And that's just
one example.
> then we can try to work on that. Maybe something a la vhost
> can be done to optimize, I'll have a think.
>
> > The one thing I'd like to understand that the comment seems to imply
> > that there is a significant difference in overhead between a hypercall
> > and an MMIO. In my experience, both are pretty similar in cost for a
> > handling location (both in userspace or both in the kernel). MMIO
> > handling is a tiny bit more expensive due to a guaranteed TLB miss
> > followed by a walk of the in-kernel device ranges, but that's all. It
> > should hardly register.
> >
> > And if you really want some super-low latency, low overhead
> > signalling, maybe an exception is the wrong tool for the job. Shared
> > memory communication could be more appropriate.
>
> I presume some kind of signalling mechanism will be necessary to
> synchronously update host scheduling parameters in response to guest
> frequency requests, but if the volume of data requires it then a shared
> buffer + doorbell type of approach should do.
Part of the communication doesn't need synchronous handling by the
host. So, what I said above.
> Thinking about it, using SCMI over virtio would implement exactly that.
> Linux-as-a-guest already supports it IIRC, so possibly the problem
> being addressed in this series could be 'simply' solved using an SCMI
> backend in the VMM...
This will be worse than all the options we've tried so far because it
has the userspace overhead AND uclamp overhead.
-Saravana
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [RFC PATCH 0/6] Improve VM DVFS and task placement behavior
2023-04-05 8:05 ` Peter Zijlstra
@ 2023-04-05 21:08 ` Saravana Kannan
2023-04-06 7:36 ` Peter Zijlstra
2023-04-06 7:38 ` Peter Zijlstra
0 siblings, 2 replies; 27+ messages in thread
From: Saravana Kannan @ 2023-04-05 21:08 UTC (permalink / raw)
To: Peter Zijlstra
Cc: David Dai, Rafael J. Wysocki, Viresh Kumar, Rob Herring,
Krzysztof Kozlowski, Paolo Bonzini, Jonathan Corbet, Marc Zyngier,
Oliver Upton, James Morse, Suzuki K Poulose, Zenghui Yu,
Catalin Marinas, Will Deacon, Mark Rutland, Lorenzo Pieralisi,
Sudeep Holla, Ingo Molnar, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Daniel Bristot de Oliveira, Valentin Schneider, kernel-team,
linux-pm, devicetree, linux-kernel, kvm, linux-doc,
linux-arm-kernel, kvmarm
On Wed, Apr 5, 2023 at 1:06 AM Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Thu, Mar 30, 2023 at 03:43:35PM -0700, David Dai wrote:
> > Hi,
> >
> > This patch series is a continuation of the talk Saravana gave at LPC 2022
> > titled "CPUfreq/sched and VM guest workload problems" [1][2][3]. The gist
> > of the talk is that workloads running in a guest VM get terrible task
> > placement and DVFS behavior when compared to running the same workload in
> > the host. Effectively, no EAS for threads inside VMs. This would make power
> > and performance terrible just by running the workload in a VM even if we
> > assume there is zero virtualization overhead.
> >
> > We have been iterating over different options for communicating between
> > guest and host, ways of applying the information coming from the
> > guest/host, etc to figure out the best performance and power improvements
> > we could get.
> >
> > The patch series in its current state is NOT meant for landing in the
> > upstream kernel. We are sending this patch series to share the current
> > progress and data we have so far. The patch series is meant to be easy to
> > cherry-pick and test on various devices to see what performance and power
> > benefits this might give for others.
> >
> > With this series, a workload running in a VM gets the same task placement
> > and DVFS treatment as it would when running in the host.
> >
> > As expected, we see significant performance improvement and better
> > performance/power ratio. If anyone else wants to try this out for your VM
> > workloads and report findings, that'd be very much appreciated.
> >
> > The idea is to improve VM CPUfreq/sched behavior by:
> > - Having guest kernel to do accurate load tracking by taking host CPU
> > arch/type and frequency into account.
> > - Sharing vCPU run queue utilization information with the host so that the
> > host can do proper frequency scaling and task placement on the host side.
>
> So, not having actually been send many of the patches I've no idea what
> you've done... Please, eradicate this ridiculous idea of sending random
> people a random subset of a patch series. Either send all of it or none,
> this is a bloody nuisance.
Sorry, that was our intention, but we had a scripting error. It's been fixed.
I have a script to use with git send-email's --to-cmd and --cc-cmd
options. It uses get_maintainer.pl to figure out who to email, but it
gets trickier for a patch series that spans maintainer trees.
v2 and later will have everyone get all the patches.
> Having said that; my biggest worry is that you're making scheduler
> internals into an ABI. I would hate for this paravirt interface to tie
> us down.
The only 2 pieces of information shared between host/guest are:
1. Host CPU frequency -- this isn't really scheduler internals and
will map nicely to a virtual cpufreq driver.
2. A vCPU util value between 0 - 1024 where 1024 corresponds to the
highest performance point across all CPUs (taking freq, arch, etc into
consideration). Yes, this currently matches how the run queue util is
tracked, but we can document the interface as "percentage of max
performance capability", but representing it as 0 - 1024 instead of
0-100. That way, even if the scheduler changes how it tracks util in
the future, we can still keep this interface between guest/host and
map it appropriately on the host end.
In either case, we could even have a Windows guest where they might
track vCPU utilization differently and still have this work with the
Linux host with this interface.
Does that sound reasonable to you?
Another option is to convert (2) into a "CPU frequency" request (without
latching it to values in the CPUfreq table), but it'll add some
unnecessary math (with division) on the guest and host end. But I'd
rather keep it as 0-1024 unless you really want this 2nd option.
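For clarity, the two ways of consuming the same 0-1024 value on the host
side would look roughly like this (names are illustrative):

#include <stdint.h>

#define GUEST_PERF_SCALE	1024u	/* max perf point across all host CPUs */

/* What this series does, more or less: feed it into util_guest/uclamp. */
static uint32_t guest_perf_to_util(uint32_t perf)
{
	return perf > GUEST_PERF_SCALE ? GUEST_PERF_SCALE : perf;
}

/* Option (2): turn the same value into a frequency request instead. */
static uint64_t guest_perf_to_freq_khz(uint32_t perf, uint32_t max_host_freq_khz)
{
	return ((uint64_t)perf * max_host_freq_khz) / GUEST_PERF_SCALE;
}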
-Saravana
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [RFC PATCH 0/6] Improve VM DVFS and task placement behavior
2023-04-05 21:08 ` Saravana Kannan
@ 2023-04-06 7:36 ` Peter Zijlstra
2023-04-06 7:38 ` Peter Zijlstra
1 sibling, 0 replies; 27+ messages in thread
From: Peter Zijlstra @ 2023-04-06 7:36 UTC (permalink / raw)
To: Saravana Kannan
Cc: David Dai, Rafael J. Wysocki, Viresh Kumar, Rob Herring,
Krzysztof Kozlowski, Paolo Bonzini, Jonathan Corbet, Marc Zyngier,
Oliver Upton, James Morse, Suzuki K Poulose, Zenghui Yu,
Catalin Marinas, Will Deacon, Mark Rutland, Lorenzo Pieralisi,
Sudeep Holla, Ingo Molnar, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Daniel Bristot de Oliveira, Valentin Schneider, kernel-team,
linux-pm, devicetree, linux-kernel, kvm, linux-doc,
linux-arm-kernel, kvmarm
On Wed, Apr 05, 2023 at 02:08:43PM -0700, Saravana Kannan wrote:
> Sorry, that was our intention, but had a scripting error. It's been fixed.
>
> I have a script to use with git send-email's --to-cmd and --cc-cmd
> option. It uses get_maintainers.pl to figure out who to email, but it
> gets trickier for a patch series that spans maintainer trees.
What I do is simply run get_maintainer.pl against the full series
diff and CC everybody the same.
Then again, I don't use git-send-email, so I've no idea how to use that.
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [RFC PATCH 0/6] Improve VM DVFS and task placement behavior
2023-04-05 21:08 ` Saravana Kannan
2023-04-06 7:36 ` Peter Zijlstra
@ 2023-04-06 7:38 ` Peter Zijlstra
1 sibling, 0 replies; 27+ messages in thread
From: Peter Zijlstra @ 2023-04-06 7:38 UTC (permalink / raw)
To: Saravana Kannan
Cc: David Dai, Rafael J. Wysocki, Viresh Kumar, Rob Herring,
Krzysztof Kozlowski, Paolo Bonzini, Jonathan Corbet, Marc Zyngier,
Oliver Upton, James Morse, Suzuki K Poulose, Zenghui Yu,
Catalin Marinas, Will Deacon, Mark Rutland, Lorenzo Pieralisi,
Sudeep Holla, Ingo Molnar, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Daniel Bristot de Oliveira, Valentin Schneider, kernel-team,
linux-pm, devicetree, linux-kernel, kvm, linux-doc,
linux-arm-kernel, kvmarm
On Wed, Apr 05, 2023 at 02:08:43PM -0700, Saravana Kannan wrote:
> The only 2 pieces of information shared between host/guest are:
>
> 1. Host CPU frequency -- this isn't really scheduler internals and
> will map nicely to a virtual cpufreq driver.
>
> 2. A vCPU util value between 0 - 1024 where 1024 corresponds to the
> highest performance point across all CPUs (taking freq, arch, etc into
> consideration). Yes, this currently matches how the run queue util is
> tracked, but we can document the interface as "percentage of max
> performance capability", but representing it as 0 - 1024 instead of
> 0-100. That way, even if the scheduler changes how it tracks util in
> the future, we can still keep this interface between guest/host and
> map it appropriately on the host end.
>
> In either case, we could even have a Windows guest where they might
> track vCPU utilization differently and still have this work with the
> Linux host with this interface.
>
> Does that sound reasonable to you?
Yeah, I suppose that's managable.
Something that wasn't initially clear to me: all this hard-assumes a 1:1
vCPU:CPU relation, right? Which isn't typical in virt land.
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [RFC PATCH 0/6] Improve VM DVFS and task placement behavior
2023-04-05 21:00 ` Saravana Kannan
@ 2023-04-06 8:42 ` Marc Zyngier
0 siblings, 0 replies; 27+ messages in thread
From: Marc Zyngier @ 2023-04-06 8:42 UTC (permalink / raw)
To: Saravana Kannan
Cc: David Dai, Oliver Upton, Rafael J. Wysocki, Viresh Kumar,
Rob Herring, Krzysztof Kozlowski, Paolo Bonzini, Jonathan Corbet,
James Morse, Suzuki K Poulose, Zenghui Yu, Catalin Marinas,
Will Deacon, Mark Rutland, Lorenzo Pieralisi, Sudeep Holla,
Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Daniel Bristot de Oliveira, Valentin Schneider, kernel-team,
linux-pm, devicetree, linux-kernel, kvm, linux-doc,
linux-arm-kernel, kvmarm
On Wed, 05 Apr 2023 22:00:59 +0100,
Saravana Kannan <saravanak@google.com> wrote:
>
> On Tue, Apr 4, 2023 at 1:49 PM Marc Zyngier <maz@kernel.org> wrote:
> >
> > On Tue, 04 Apr 2023 20:43:40 +0100,
> > Oliver Upton <oliver.upton@linux.dev> wrote:
> > >
> > > Folks,
> > >
> > > On Thu, Mar 30, 2023 at 03:43:35PM -0700, David Dai wrote:
> > >
> > > <snip>
> > >
> > > > PCMark
> > > > Higher is better
> > > > +-------------------+----------+------------+--------+-------+--------+
> > > > | Test Case (score) | Baseline | Hypercall | %delta | MMIO | %delta |
> > > > +-------------------+----------+------------+--------+-------+--------+
> > > > | Weighted Total | 6136 | 7274 | +19% | 6867 | +12% |
> > > > +-------------------+----------+------------+--------+-------+--------+
> > > > | Web Browsing | 5558 | 6273 | +13% | 6035 | +9% |
> > > > +-------------------+----------+------------+--------+-------+--------+
> > > > | Video Editing | 4921 | 5221 | +6% | 5167 | +5% |
> > > > +-------------------+----------+------------+--------+-------+--------+
> > > > | Writing | 6864 | 8825 | +29% | 8529 | +24% |
> > > > +-------------------+----------+------------+--------+-------+--------+
> > > > | Photo Editing | 7983 | 11593 | +45% | 10812 | +35% |
> > > > +-------------------+----------+------------+--------+-------+--------+
> > > > | Data Manipulation | 5814 | 6081 | +5% | 5327 | -8% |
> > > > +-------------------+----------+------------+--------+-------+--------+
> > > >
> > > > PCMark Performance/mAh
> > > > Higher is better
> > > > +-----------+----------+-----------+--------+------+--------+
> > > > | | Baseline | Hypercall | %delta | MMIO | %delta |
> > > > +-----------+----------+-----------+--------+------+--------+
> > > > | Score/mAh | 79 | 88 | +11% | 83 | +7% |
> > > > +-----------+----------+-----------+--------+------+--------+
> > > >
> > > > Roblox
> > > > Higher is better
> > > > +-----+----------+------------+--------+-------+--------+
> > > > | | Baseline | Hypercall | %delta | MMIO | %delta |
> > > > +-----+----------+------------+--------+-------+--------+
> > > > | FPS | 18.25 | 28.66 | +57% | 24.06 | +32% |
> > > > +-----+----------+------------+--------+-------+--------+
> > > >
> > > > Roblox Frames/mAh
> > > > Higher is better
> > > > +------------+----------+------------+--------+--------+--------+
> > > > | | Baseline | Hypercall | %delta | MMIO | %delta |
> > > > +------------+----------+------------+--------+--------+--------+
> > > > | Frames/mAh | 91.25 | 114.64 | +26% | 103.11 | +13% |
> > > > +------------+----------+------------+--------+--------+--------+
> > >
> > > </snip>
> > >
> > > > Next steps:
> > > > ===========
> > > > We are continuing to look into communication mechanisms other than
> > > > hypercalls that are just as/more efficient and avoid switching into the VMM
> > > > userspace. Any inputs in this regard are greatly appreciated.
>
> Hi Oliver and Marc,
>
> Replying to both of you in this one email.
>
> > >
> > > We're highly unlikely to entertain such an interface in KVM.
> > >
> > > The entire feature is dependent on pinning vCPUs to physical cores, for which
> > > userspace is in the driver's seat. That is a well established and documented
> > > policy which can be seen in the way we handle heterogeneous systems and
> > > vPMU.
> > >
> > > Additionally, this bloats the KVM PV ABI with highly VMM-dependent interfaces
> > > that I would not expect to benefit the typical user of KVM.
> > >
> > > Based on the data above, it would appear that the userspace implementation is
> > > in the same neighborhood as a KVM-based implementation, which only further
> > > weakens the case for moving this into the kernel.
>
> Oliver,
>
> Sorry if the tables/data aren't presented in an intuitive way, but
> MMIO vs hypercall is definitely not in the same neighborhood. The
> hypercall method often gives close to 2x the improvement that the MMIO
> method gives. For example:
>
> - Roblox FPS: MMIO improves it by 32% vs hypercall improves it by 57%.
> - Frames/mAh: MMIO improves it by 13% vs hypercall improves it by 26%.
> - PC Mark Data manipulation: MMIO makes it worse by 8% vs hypercall
> improves it by 5%
>
> Hypercall does better for other cases too, just not as good. For example,
> - PC Mark Photo editing: Going from MMIO to hypercall gives a 10% improvement.
>
> These are all pretty non-trivial, at least in the mobile world. Heck,
> whole teams would spend months for 2% improvement in battery :)
>
> > >
> > > I certainly can appreciate the motivation for the series, but this feature
> > > should be in userspace as some form of a virtual device.
> >
> > +1 on all of the above.
>
> Marc and Oliver,
>
> We are not tied to hypercalls. We want to do the right thing here, but
> MMIO going all the way to userspace definitely doesn't cut it as is.
> This is where we need some guidance. See more below.
I don't buy this assertion at all. An MMIO in userspace is already
much better than nothing. One of my many objections to the whole series
is that it is built as a massively invasive thing that has too many
fingers in too many pies, with unsustainable assumptions such as 1:1
mapping between CPU and vCPUs.
I'd rather you build something simple first (pure userspace using
MMIOs), work out where the bottlenecks are, and work with us to add
what is needed to get to something sensible, and only that. I'm not
willing to sacrifice maintainability for maximum performance (the
whole thing reminds me of the in-kernel http server...).
>
> > The one thing I'd like to understand that the comment seems to imply
> > that there is a significant difference in overhead between a hypercall
> > and an MMIO. In my experience, both are pretty similar in cost for a
> > handling location (both in userspace or both in the kernel).
>
> I think the main difference really is that in our hypercall vs MMIO
> comparison the hypercall is handled in the kernel vs MMIO goes all the
> way to userspace. I agree with you that the difference probably won't
> be significant if both of them go to the same "depth" in the privilege
> levels.
>
> > MMIO
> > handling is a tiny bit more expensive due to a guaranteed TLB miss
> > followed by a walk of the in-kernel device ranges, but that's all. It
> > should hardly register.
> >
> > And if you really want some super-low latency, low overhead
> > signalling, maybe an exception is the wrong tool for the job. Shared
> > memory communication could be more appropriate.
>
> Yeah, that's one of our next steps. Ideally, we want to use shared
> memory for the host to guest information flow. It's a 32-bit value
> representing the current frequency that the host can update whenever
> the host CPU frequency changes and the guest can read whenever it
> needs it.
Why should the guest care? Why can't the guest ask for an arbitrary
capacity, and get what it gets? You give no information as to *why*
you are doing what you are doing...
>
> For guest to host information flow, we'll need a kick from guest to
> host because we need to take action on the host side when threads
> migrate between vCPUs and cause a significant change in vCPU util.
> Again it can be just a shared memory and some kick. This is what we
> are currently trying to figure out how to do.
That kick would have to go to userspace. There is no way I'm willing
to introduce scheduling primitives inside KVM (the ones we have are
ridiculously bad anyway), and I very much want to avoid extra PV gunk.
> If there are APIs to do this, can you point us to those please? We'd
> also want the shared memory to be accessible by the VMM (so, shared
> between guest kernel, host kernel and VMM).
By default, *ALL* the memory is shared. Isn't that wonderful?
>
> Are the above next steps sane? Or is that a no-go? The main thing we
> want to cut out is the need to switch to userspace for every single
> interaction because, as is, that leaves a lot on the table.
Well, for a start, you could disclose how often you hit this DVFS
"device", and when are the critical state changes that must happen
immediately vs those that can simply be posted without having to take
immediate effect.
This sort of information would be much more interesting than a bunch
of benchmarks I know nothing about.
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [RFC PATCH 0/6] Improve VM DVFS and task placement behavior
2023-04-05 21:07 ` Saravana Kannan
@ 2023-04-06 12:52 ` Quentin Perret
2023-04-06 21:39 ` David Dai
0 siblings, 1 reply; 27+ messages in thread
From: Quentin Perret @ 2023-04-06 12:52 UTC (permalink / raw)
To: Saravana Kannan
Cc: Marc Zyngier, David Dai, Oliver Upton, Rafael J. Wysocki,
Viresh Kumar, Rob Herring, Krzysztof Kozlowski, Paolo Bonzini,
Jonathan Corbet, James Morse, Suzuki K Poulose, Zenghui Yu,
Catalin Marinas, Will Deacon, Mark Rutland, Lorenzo Pieralisi,
Sudeep Holla, Ingo Molnar, Peter Zijlstra, Juri Lelli,
Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall,
Mel Gorman, Daniel Bristot de Oliveira, Valentin Schneider,
kernel-team, linux-pm, devicetree, linux-kernel, kvm, linux-doc,
linux-arm-kernel, kvmarm
On Wednesday 05 Apr 2023 at 14:07:18 (-0700), Saravana Kannan wrote:
> On Wed, Apr 5, 2023 at 12:48 AM 'Quentin Perret' via kernel-team
> > And I concur with all the above as well. Putting this in the kernel is
> > not an obvious fit at all as that requires a number of assumptions about
> > the VMM.
> >
> > As Oliver pointed out, the guest topology, and how it maps to the host
> > topology (vcpu pinning etc) is very much a VMM policy decision and will
> > be particularly important to handle guest frequency requests correctly.
> >
> > In addition to that, the VMM's software architecture may have an impact.
> > Crosvm for example does device emulation in separate processes for
> > security reasons, so it is likely that adjusting the scheduling
> > parameters ('util_guest', uclamp, or the like) only for the vCPU thread that
> > issues frequency requests will be sub-optimal for performance; we may
> > want to adjust those parameters for all the tasks that are on the
> > critical path.
> >
> > And at an even higher level, assuming in the kernel a certain mapping of
> > vCPU threads to host threads feels kinda wrong; this too is a host
> > userspace policy decision I believe. Not that anybody in their right
> > mind would want to do this, but I _think_ it would technically be
> > feasible to serialize the execution of multiple vCPUs on the same host
> > thread, at which point the util_guest thingy becomes entirely bogus. (I
> > obviously don't want to conflate this use-case, it's just an example
> > that shows the proposed abstraction in the series is not a perfect fit
> > for the KVM userspace delegation model.)
>
> See my reply to Oliver and Marc. To me it looks like we are converging
> towards having shared memory between guest, host kernel and VMM and
> that should address all our concerns.
Hmm, that is not at all my understanding of what has been the most
important part of the feedback so far: this whole thing belongs to
userspace.
> The guest will see an MMIO device; writing to it will trigger the host
> kernel to do the basic "set util_guest/uclamp for the vCPU thread that
> corresponds to the vCPU" and then the VMM can do more on top as/if
> needed (because it has access to the shared memory too). Does that
> make sense?
Not really, no. I've given examples of why it doesn't make sense for
the kernel to do this, which still seems to be the case with what you're
suggesting here.
> Even in the extreme example, the stuff the kernel would do would still
> be helpful, but not sufficient. You can aggregate the
> util_guest/uclamp and do whatever from the VMM.
> Technically in the extreme example, you don't need any of this. The
> normal util tracking of the vCPU thread on the host side would be
> sufficient.
>
> Actually, any time we have only 1 vCPU host thread per VM, we shouldn't
> be using anything in this patch series and shouldn't instantiate the guest
> device at all.
> > So +1 from me to move this as a virtual device of some kind. And if the
> > extra cost of exiting all the way back to userspace is prohibitive (is
> > it btw?),
>
> I think the "13% increase in battery consumption for games" makes it
> pretty clear that going to userspace is prohibitive. And that's just
> one example.
I beg to differ. We need to understand where these 13% come from in more
detail. Is it really the actual cost of the userspace exit? Or is it
just that from userspace the only knob you can play with is uclamp and
that didn't reach the expected level of performance?
If that is the userspace exit, then we can work to optimize that -- it's
a fairly common problem in the virt world, nothing special here.
And if the issue is the lack of expressiveness in uclamp, then that too
is something we should work on, but clearly giving vCPU threads more
'power' than normal host threads is a bit of a red flag IMO. vCPU
threads must be constrained in the same way that userspace threads are,
because they _are_ userspace threads.
> > then we can try to work on that. Maybe something a la vhost
> > can be done to optimize, I'll have a think.
> >
> > > The one thing I'd like to understand is that the comment seems to imply
> > > that there is a significant difference in overhead between a hypercall
> > > and an MMIO. In my experience, both are pretty similar in cost for a
> > > handling location (both in userspace or both in the kernel). MMIO
> > > handling is a tiny bit more expensive due to a guaranteed TLB miss
> > > followed by a walk of the in-kernel device ranges, but that's all. It
> > > should hardly register.
> > >
> > > And if you really want some super-low latency, low overhead
> > > signalling, maybe an exception is the wrong tool for the job. Shared
> > > memory communication could be more appropriate.
> >
> > I presume some kind of signalling mechanism will be necessary to
> > synchronously update host scheduling parameters in response to guest
> > frequency requests, but if the volume of data requires it then a shared
> > buffer + doorbell type of approach should do.
>
> Part of the communication doesn't need synchronous handling by the
> host. So, what I said above.
I've also replied to another message about the scale invariance issue,
and I'm not convinced the frequency based interface proposed here really
makes sense. An AMU-like interface is very likely to be superior.
> > Thinking about it, using SCMI over virtio would implement exactly that.
> > Linux-as-a-guest already supports it IIRC, so possibly the problem
> > being addressed in this series could be 'simply' solved using an SCMI
> > backend in the VMM...
>
> This will be worse than all the options we've tried so far because it
> has the userspace overhead AND uclamp overhead.
But it doesn't violate the whole KVM userspace delegation model, so we
should start from there and then optimize further if need be.
Thanks,
Quentin
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [RFC PATCH 0/6] Improve VM DVFS and task placement behavior
2023-04-06 12:52 ` Quentin Perret
@ 2023-04-06 21:39 ` David Dai
0 siblings, 0 replies; 27+ messages in thread
From: David Dai @ 2023-04-06 21:39 UTC (permalink / raw)
To: Quentin Perret
Cc: Saravana Kannan, Marc Zyngier, Oliver Upton, Rafael J. Wysocki,
Viresh Kumar, Rob Herring, Krzysztof Kozlowski, Paolo Bonzini,
Jonathan Corbet, James Morse, Suzuki K Poulose, Zenghui Yu,
Catalin Marinas, Will Deacon, Mark Rutland, Lorenzo Pieralisi,
Sudeep Holla, Ingo Molnar, Peter Zijlstra, Juri Lelli,
Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall,
Mel Gorman, Daniel Bristot de Oliveira, Valentin Schneider,
kernel-team, linux-pm, devicetree, linux-kernel, kvm, linux-doc,
linux-arm-kernel, kvmarm
On Thu, Apr 6, 2023 at 5:52 AM Quentin Perret <qperret@google.com> wrote:
>
> On Wednesday 05 Apr 2023 at 14:07:18 (-0700), Saravana Kannan wrote:
> > On Wed, Apr 5, 2023 at 12:48 AM 'Quentin Perret' via kernel-team
> > > And I concur with all the above as well. Putting this in the kernel is
> > > not an obvious fit at all as that requires a number of assumptions about
> > > the VMM.
> > >
> > > As Oliver pointed out, the guest topology, and how it maps to the host
> > > topology (vcpu pinning etc) is very much a VMM policy decision and will
> > > be particularly important to handle guest frequency requests correctly.
> > >
> > > In addition to that, the VMM's software architecture may have an impact.
> > > Crosvm for example does device emulation in separate processes for
> > > security reasons, so it is likely that adjusting the scheduling
> > > parameters ('util_guest', uclamp, or the like) only for the vCPU thread that
> > > issues frequency requests will be sub-optimal for performance; we may
> > > want to adjust those parameters for all the tasks that are on the
> > > critical path.
> > >
> > > And at an even higher level, assuming in the kernel a certain mapping of
> > > vCPU threads to host threads feels kinda wrong; this too is a host
> > > userspace policy decision I believe. Not that anybody in their right
> > > mind would want to do this, but I _think_ it would technically be
> > > feasible to serialize the execution of multiple vCPUs on the same host
> > > thread, at which point the util_guest thingy becomes entirely bogus. (I
> > > obviously don't want to conflate this use-case, it's just an example
> > > that shows the proposed abstraction in the series is not a perfect fit
> > > for the KVM userspace delegation model.)
> >
> > See my reply to Oliver and Marc. To me it looks like we are converging
> > towards having shared memory between guest, host kernel and VMM and
> > that should address all our concerns.
>
> Hmm, that is not at all my understanding of what has been the most
> important part of the feedback so far: this whole thing belongs to
> userspace.
>
> > The guest will see an MMIO device; writing to it will trigger the host
> > kernel to do the basic "set util_guest/uclamp for the vCPU thread that
> > corresponds to the vCPU" and then the VMM can do more on top as/if
> > needed (because it has access to the shared memory too). Does that
> > make sense?
>
> Not really, no. I've given examples of why it doesn't make sense for
> the kernel to do this, which still seems to be the case with what you're
> suggesting here.
>
> > Even in the extreme example, the stuff the kernel would do would still
> > be helpful, but not sufficient. You can aggregate the
> > util_guest/uclamp and do whatever from the VMM.
> > Technically in the extreme example, you don't need any of this. The
> > normal util tracking of the vCPU thread on the host side would be
> > sufficient.
> >
> > Actually, any time we have only 1 vCPU host thread per VM, we shouldn't
> > be using anything in this patch series and shouldn't instantiate the guest
> > device at all.
>
> > > So +1 from me to move this as a virtual device of some kind. And if the
> > > extra cost of exiting all the way back to userspace is prohibitive (is
> > > it btw?),
> >
> > I think the "13% increase in battery consumption for games" makes it
> > pretty clear that going to userspace is prohibitive. And that's just
> > one example.
>
Hi Quentin,
Appreciate the feedback,
> I beg to differ. We need to understand where these 13% come from in more
> detail. Is it really the actual cost of the userspace exit? Or is it
> just that from userspace the only knob you can play with is uclamp and
> that didn't reach the expected level of performance?
To clarify, the MMIO numbers shown in the cover letter were collected
with the vCPU task's util_guest being updated, as opposed to uclamp_min. In
that configuration, userspace (the VMM) handles the mmio_exit from the guest and
makes an ioctl on the host kernel to update util_guest for the vCPU
task.
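For context, this is roughly the shape of that path on the VMM side of
KVM's run loop. The guest physical address, the 4-byte register layout and
the apply_freq_hint() helper are illustrative assumptions; only the kvm_run
MMIO exit fields are standard KVM UAPI:

#include <linux/kvm.h>
#include <stdint.h>
#include <string.h>
#include <sys/types.h>

#define DVFS_MMIO_ADDR	0x0a000000UL	/* illustrative guest PA */

/* Hypothetical: applies the hint, e.g. via this RFC's util_guest ioctl. */
void apply_freq_hint(pid_t vcpu_tid, uint32_t freq_khz);

/* Called after KVM_RUN returns, with the vCPU's kvm_run page. */
static void handle_exit(struct kvm_run *run, pid_t vcpu_tid)
{
	uint32_t req_khz;

	if (run->exit_reason != KVM_EXIT_MMIO || !run->mmio.is_write)
		return;
	if (run->mmio.phys_addr != DVFS_MMIO_ADDR || run->mmio.len != 4)
		return;

	/* The guest wrote its requested frequency to the doorbell register. */
	memcpy(&req_khz, run->mmio.data, sizeof(req_khz));
	apply_freq_hint(vcpu_tid, req_khz);
}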
>
> If that is the userspace exit, then we can work to optimize that -- it's
> a fairly common problem in the virt world, nothing special here.
>
Ok, we're open to suggestions on how to better optimize here.
> And if the issue is the lack of expressiveness in uclamp, then that too
> is something we should work on, but clearly giving vCPU threads more
> 'power' than normal host threads is a bit of a red flag IMO. vCPU
> threads must be constrained in the same way that userspace threads are,
> because they _are_ userspace threads.
>
> > > then we can try to work on that. Maybe something a la vhost
> > > can be done to optimize, I'll have a think.
> > >
> > > > The one thing I'd like to understand is that the comment seems to imply
> > > > that there is a significant difference in overhead between a hypercall
> > > > and an MMIO. In my experience, both are pretty similar in cost for a
> > > > handling location (both in userspace or both in the kernel). MMIO
> > > > handling is a tiny bit more expensive due to a guaranteed TLB miss
> > > > followed by a walk of the in-kernel device ranges, but that's all. It
> > > > should hardly register.
> > > >
> > > > And if you really want some super-low latency, low overhead
> > > > signalling, maybe an exception is the wrong tool for the job. Shared
> > > > memory communication could be more appropriate.
> > >
> > > I presume some kind of signalling mechanism will be necessary to
> > > synchronously update host scheduling parameters in response to guest
> > > frequency requests, but if the volume of data requires it then a shared
> > > buffer + doorbell type of approach should do.
> >
> > Part of the communication doesn't need synchronous handling by the
> > host. So, what I said above.
>
> I've also replied to another message about the scale invariance issue,
> and I'm not convinced the frequency based interface proposed here really
> makes sense. An AMU-like interface is very likely to be superior.
>
Some sort of AMU-based interface was discussed offline with Saravana,
but I'm not sure how to best implement that. If you have any pointers
to get started, that would be helpful.
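For what it's worth, the AMU idea boils down to exposing a pair of
free-running counters and letting the guest derive the average frequency
from their ratio over a sampling window. A sketch of that arithmetic, with
read_amu_sample() standing in for whatever accessor such an interface would
provide (a made-up name, not an existing API):

#include <linux/math64.h>
#include <linux/types.h>

struct amu_sample {
	u64 core_cycles;	/* counts at the CPU's current frequency */
	u64 const_cycles;	/* counts at a fixed, known reference rate */
};

/* Hypothetical accessor; real AMUv1 exposes equivalent per-CPU counters. */
void read_amu_sample(struct amu_sample *s);

/* Average frequency between two samples; no exit or hypercall needed.
 * Assumes short sampling windows so the product below fits in 64 bits. */
static u64 amu_avg_freq_hz(const struct amu_sample *start,
			   const struct amu_sample *end,
			   u64 const_rate_hz)
{
	u64 d_core = end->core_cycles - start->core_cycles;
	u64 d_const = end->const_cycles - start->const_cycles;

	if (!d_const)
		return 0;

	return div64_u64(d_core * const_rate_hz, d_const);
}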
> > > Thinking about it, using SCMI over virtio would implement exactly that.
> > > Linux-as-a-guest already supports it IIRC, so possibly the problem
> > > being addressed in this series could be 'simply' solved using an SCMI
> > > backend in the VMM...
> >
> > This will be worse than all the options we've tried so far because it
> > has the userspace overhead AND uclamp overhead.
>
> But it doesn't violate the whole KVM userspace delegation model, so we
> should start from there and then optimize further if need be.
Do you have any references we could use to get started experimenting with
SCMI? (e.g. SCMI backend support in crosvm).
For RFC v3, I'll post a CPUfreq driver implementation that only uses
MMIO, without any host kernel modifications (i.e. only using uclamp
as a knob to tune the host), along with performance numbers, and then
work on optimizing from there.
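As a rough sketch of what the uclamp-only knob amounts to on the host,
assuming the sched_attr layout and flags from the upstream sched_setattr()
UAPI; the linear mapping from requested frequency to a 0..1024 utilisation
value is a naive assumption, not something this series prescribes:

#define _GNU_SOURCE
#include <stdint.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>

/* Local copy of the UAPI layout (include/uapi/linux/sched/types.h). */
struct sched_attr {
	uint32_t size;
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;
	uint32_t sched_priority;
	uint64_t sched_runtime;
	uint64_t sched_deadline;
	uint64_t sched_period;
	uint32_t sched_util_min;
	uint32_t sched_util_max;
};

#define SCHED_FLAG_KEEP_ALL		0x18	/* keep policy and params */
#define SCHED_FLAG_UTIL_CLAMP_MIN	0x20

/* Boost the vCPU thread so the host schedules/clocks it at least this high. */
static int vcpu_set_uclamp_min(pid_t vcpu_tid, uint32_t req_khz,
			       uint32_t max_khz)
{
	struct sched_attr attr = {
		.size = sizeof(attr),
		.sched_flags = SCHED_FLAG_KEEP_ALL |
			       SCHED_FLAG_UTIL_CLAMP_MIN,
		/* naive linear map of the request onto the 0..1024 range */
		.sched_util_min = (uint32_t)((1024ULL * req_khz) / max_khz),
	};

	/* SYS_sched_setattr comes from sys/syscall.h on recent systems. */
	return syscall(SYS_sched_setattr, vcpu_tid, &attr, 0);
}

Whether a per-thread clamp like this is expressive enough, compared to the
util_guest approach in this series, is exactly the open question in this
sub-thread.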
Thanks,
David
>
> Thanks,
> Quentin
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [RFC PATCH 0/6] Improve VM DVFS and task placement behavior
2023-03-30 22:43 [RFC PATCH 0/6] Improve VM DVFS and task placement behavior David Dai
` (6 preceding siblings ...)
2023-04-05 8:05 ` Peter Zijlstra
@ 2023-04-27 7:46 ` Pavan Kondeti
2023-04-27 9:52 ` Gupta, Pankaj
7 siblings, 1 reply; 27+ messages in thread
From: Pavan Kondeti @ 2023-04-27 7:46 UTC (permalink / raw)
To: David Dai, Saravana Kannan
Cc: Rafael J. Wysocki, Viresh Kumar, Rob Herring, Krzysztof Kozlowski,
Paolo Bonzini, Jonathan Corbet, Marc Zyngier, Oliver Upton,
James Morse, Suzuki K Poulose, Zenghui Yu, Catalin Marinas,
Will Deacon, Mark Rutland, Lorenzo Pieralisi, Sudeep Holla,
Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Daniel Bristot de Oliveira, Valentin Schneider, kernel-team,
linux-pm, devicetree, linux-kernel, kvm, linux-doc,
linux-arm-kernel, kvmarm
On Thu, Mar 30, 2023 at 03:43:35PM -0700, David Dai wrote:
> Hi,
>
> This patch series is a continuation of the talk Saravana gave at LPC 2022
> titled "CPUfreq/sched and VM guest workload problems" [1][2][3]. The gist
> of the talk is that workloads running in a guest VM get terrible task
> placement and DVFS behavior when compared to running the same workload in
> the host. Effectively, no EAS for threads inside VMs. This would make power
> and performance terrible just by running the workload in a VM even if we
> assume there is zero virtualization overhead.
>
> We have been iterating over different options for communicating between
> guest and host, ways of applying the information coming from the
> guest/host, etc to figure out the best performance and power improvements
> we could get.
>
> The patch series in its current state is NOT meant for landing in the
> upstream kernel. We are sending this patch series to share the current
> progress and data we have so far. The patch series is meant to be easy to
> cherry-pick and test on various devices to see what performance and power
> benefits this might give for others.
>
> With this series, a workload running in a VM gets the same task placement
> and DVFS treatment as it would when running in the host.
>
> As expected, we see significant performance improvement and better
> performance/power ratio. If anyone else wants to try this out for your VM
> workloads and report findings, that'd be very much appreciated.
>
> The idea is to improve VM CPUfreq/sched behavior by:
> - Having guest kernel to do accurate load tracking by taking host CPU
> arch/type and frequency into account.
> - Sharing vCPU run queue utilization information with the host so that the
> host can do proper frequency scaling and task placement on the host side.
>
[...]
>
> Next steps:
> ===========
> We are continuing to look into communication mechanisms other than
> hypercalls that are just as/more efficient and avoid switching into the VMM
> userspace. Any inputs in this regard are greatly appreciated.
>
I am trying to understand why a virtio-based cpufreq does not work here.
The VMM on the host can process requests from the guest VM such as the freq
table, the current frequency and setting the min_freq. I believe the virtio
backend has mechanisms for acceleration (vhost) so that user space is not
involved for every frequency request from the guest.
It has the advantages that (1) it is hypervisor agnostic (virtio, basically)
and (2) the scheduler does not need additional input; the aggregated min_freq
requests from all guests should be sufficient.
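Purely as a sketch of the scheme described above, and with the caveat that
no virtio-cpufreq device type exists today, the guest-to-device request
could be as small as this (all names and fields are hypothetical):

#include <stdint.h>

/* Hypothetical request opcodes for a virtio "cpufreq" device. */
enum vcpufreq_op {
	VCPUFREQ_GET_FREQ_TABLE	= 1,
	VCPUFREQ_GET_CUR_FREQ	= 2,
	VCPUFREQ_SET_MIN_FREQ	= 3,
};

/* Guest -> device descriptor (little-endian, per virtio convention). */
struct vcpufreq_req {
	uint32_t op;		/* enum vcpufreq_op */
	uint32_t vcpu;		/* vCPU the request applies to */
	uint32_t freq_khz;	/* payload for SET_MIN_FREQ, else 0 */
};

/* Device -> guest completion. */
struct vcpufreq_resp {
	uint32_t status;
	uint32_t freq_khz;	/* payload for GET_CUR_FREQ */
};

The backend, whether in the VMM or accelerated with vhost, would then
aggregate the per-vCPU SET_MIN_FREQ requests into a single host-side
constraint, which is the aggregation referred to in (2) above.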
>
> [1] - https://lpc.events/event/16/contributions/1195/
> [2] - https://lpc.events/event/16/contributions/1195/attachments/970/1893/LPC%202022%20-%20VM%20DVFS.pdf
> [3] - https://www.youtube.com/watch?v=hIg_5bg6opU
> [4] - https://chromium-review.googlesource.com/c/crosvm/crosvm/+/4208668
> [5] - https://chromium-review.googlesource.com/c/crosvm/crosvm/+/4288027
>
> David Dai (6):
> sched/fair: Add util_guest for tasks
> kvm: arm64: Add support for get_cur_cpufreq service
> kvm: arm64: Add support for util_hint service
> kvm: arm64: Add support for get_freqtbl service
> dt-bindings: cpufreq: add bindings for virtual kvm cpufreq
> cpufreq: add kvm-cpufreq driver
>
> .../bindings/cpufreq/cpufreq-virtual-kvm.yaml | 39 +++
> Documentation/virt/kvm/api.rst | 28 ++
> .../virt/kvm/arm/get_cur_cpufreq.rst | 21 ++
> Documentation/virt/kvm/arm/get_freqtbl.rst | 23 ++
> Documentation/virt/kvm/arm/index.rst | 3 +
> Documentation/virt/kvm/arm/util_hint.rst | 22 ++
> arch/arm64/include/uapi/asm/kvm.h | 3 +
> arch/arm64/kvm/arm.c | 3 +
> arch/arm64/kvm/hypercalls.c | 60 +++++
> drivers/cpufreq/Kconfig | 13 +
> drivers/cpufreq/Makefile | 1 +
> drivers/cpufreq/kvm-cpufreq.c | 245 ++++++++++++++++++
> include/linux/arm-smccc.h | 21 ++
> include/linux/sched.h | 12 +
> include/uapi/linux/kvm.h | 3 +
> kernel/sched/core.c | 24 +-
> kernel/sched/fair.c | 15 +-
> tools/arch/arm64/include/uapi/asm/kvm.h | 3 +
> 18 files changed, 536 insertions(+), 3 deletions(-)
> create mode 100644 Documentation/devicetree/bindings/cpufreq/cpufreq-virtual-kvm.yaml
> create mode 100644 Documentation/virt/kvm/arm/get_cur_cpufreq.rst
> create mode 100644 Documentation/virt/kvm/arm/get_freqtbl.rst
> create mode 100644 Documentation/virt/kvm/arm/util_hint.rst
> create mode 100644 drivers/cpufreq/kvm-cpufreq.c
Thanks,
Pavan
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [RFC PATCH 0/6] Improve VM DVFS and task placement behavior
2023-04-27 7:46 ` Pavan Kondeti
@ 2023-04-27 9:52 ` Gupta, Pankaj
2023-04-27 11:26 ` Pavan Kondeti
0 siblings, 1 reply; 27+ messages in thread
From: Gupta, Pankaj @ 2023-04-27 9:52 UTC (permalink / raw)
To: Pavan Kondeti, David Dai, Saravana Kannan
Cc: Rafael J. Wysocki, Viresh Kumar, Rob Herring, Krzysztof Kozlowski,
Paolo Bonzini, Jonathan Corbet, Marc Zyngier, Oliver Upton,
James Morse, Suzuki K Poulose, Zenghui Yu, Catalin Marinas,
Will Deacon, Mark Rutland, Lorenzo Pieralisi, Sudeep Holla,
Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Daniel Bristot de Oliveira, Valentin Schneider, kernel-team,
linux-pm, devicetree, linux-kernel, kvm, linux-doc,
linux-arm-kernel, kvmarm
>> This patch series is a continuation of the talk Saravana gave at LPC 2022
>> titled "CPUfreq/sched and VM guest workload problems" [1][2][3]. The gist
>> of the talk is that workloads running in a guest VM get terrible task
>> placement and DVFS behavior when compared to running the same workload in
>> the host. Effectively, no EAS for threads inside VMs. This would make power
>> and performance terrible just by running the workload in a VM even if we
>> assume there is zero virtualization overhead.
>>
>> We have been iterating over different options for communicating between
>> guest and host, ways of applying the information coming from the
>> guest/host, etc to figure out the best performance and power improvements
>> we could get.
>>
>> The patch series in its current state is NOT meant for landing in the
>> upstream kernel. We are sending this patch series to share the current
>> progress and data we have so far. The patch series is meant to be easy to
>> cherry-pick and test on various devices to see what performance and power
>> benefits this might give for others.
>>
>> With this series, a workload running in a VM gets the same task placement
>> and DVFS treatment as it would when running in the host.
>>
>> As expected, we see significant performance improvement and better
>> performance/power ratio. If anyone else wants to try this out for your VM
>> workloads and report findings, that'd be very much appreciated.
>>
>> The idea is to improve VM CPUfreq/sched behavior by:
>> - Having guest kernel to do accurate load tracking by taking host CPU
>> arch/type and frequency into account.
>> - Sharing vCPU run queue utilization information with the host so that the
>> host can do proper frequency scaling and task placement on the host side.
>>
>
> [...]
>
>>
>> Next steps:
>> ===========
>> We are continuing to look into communication mechanisms other than
>> hypercalls that are just as/more efficient and avoid switching into the VMM
>> userspace. Any inputs in this regard are greatly appreciated.
>>
>
> I am trying to understand why a virtio-based cpufreq does not work here.
> The VMM on the host can process requests from the guest VM such as the freq
> table, the current frequency and setting the min_freq. I believe the virtio
> backend has mechanisms for acceleration (vhost) so that user space is not
> involved for every frequency request from the guest.
>
> It has the advantages that (1) it is hypervisor agnostic (virtio, basically)
> and (2) the scheduler does not need additional input; the aggregated min_freq
> requests from all guests should be sufficient.
Also want to add: (3) a virtio-based solution would definitely be better
from a performance POV, as it would avoid the expensive vmexits we have with
hypercalls.
Thanks,
Pankaj
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [RFC PATCH 0/6] Improve VM DVFS and task placement behavior
2023-04-27 9:52 ` Gupta, Pankaj
@ 2023-04-27 11:26 ` Pavan Kondeti
0 siblings, 0 replies; 27+ messages in thread
From: Pavan Kondeti @ 2023-04-27 11:26 UTC (permalink / raw)
To: Gupta, Pankaj
Cc: Pavan Kondeti, David Dai, Saravana Kannan, Rafael J. Wysocki,
Viresh Kumar, Rob Herring, Krzysztof Kozlowski, Paolo Bonzini,
Jonathan Corbet, Marc Zyngier, Oliver Upton, James Morse,
Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon,
Mark Rutland, Lorenzo Pieralisi, Sudeep Holla, Ingo Molnar,
Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman,
Daniel Bristot de Oliveira, Valentin Schneider, kernel-team,
linux-pm, devicetree, linux-kernel, kvm, linux-doc,
linux-arm-kernel, kvmarm
On Thu, Apr 27, 2023 at 11:52:29AM +0200, Gupta, Pankaj wrote:
>
> > > This patch series is a continuation of the talk Saravana gave at LPC 2022
> > > titled "CPUfreq/sched and VM guest workload problems" [1][2][3]. The gist
> > > of the talk is that workloads running in a guest VM get terrible task
> > > placement and DVFS behavior when compared to running the same workload in
> > > the host. Effectively, no EAS for threads inside VMs. This would make power
> > > and performance terrible just by running the workload in a VM even if we
> > > assume there is zero virtualization overhead.
> > >
> > > We have been iterating over different options for communicating between
> > > guest and host, ways of applying the information coming from the
> > > guest/host, etc to figure out the best performance and power improvements
> > > we could get.
> > >
> > > The patch series in its current state is NOT meant for landing in the
> > > upstream kernel. We are sending this patch series to share the current
> > > progress and data we have so far. The patch series is meant to be easy to
> > > cherry-pick and test on various devices to see what performance and power
> > > benefits this might give for others.
> > >
> > > With this series, a workload running in a VM gets the same task placement
> > > and DVFS treatment as it would when running in the host.
> > >
> > > As expected, we see significant performance improvement and better
> > > performance/power ratio. If anyone else wants to try this out for your VM
> > > workloads and report findings, that'd be very much appreciated.
> > >
> > > The idea is to improve VM CPUfreq/sched behavior by:
> > > - Having guest kernel to do accurate load tracking by taking host CPU
> > > arch/type and frequency into account.
> > > - Sharing vCPU run queue utilization information with the host so that the
> > > host can do proper frequency scaling and task placement on the host side.
> > >
> >
> > [...]
> >
> > >
> > > Next steps:
> > > ===========
> > > We are continuing to look into communication mechanisms other than
> > > hypercalls that are just as/more efficient and avoid switching into the VMM
> > > userspace. Any inputs in this regard are greatly appreciated.
> > >
> >
> > I am trying to understand why a virtio-based cpufreq does not work here.
> > The VMM on the host can process requests from the guest VM such as the freq
> > table, the current frequency and setting the min_freq. I believe the virtio
> > backend has mechanisms for acceleration (vhost) so that user space is not
> > involved for every frequency request from the guest.
> >
> > It has the advantages that (1) it is hypervisor agnostic (virtio, basically)
> > and (2) the scheduler does not need additional input; the aggregated min_freq
> > requests from all guests should be sufficient.
>
> Also want to add: (3) a virtio-based solution would definitely be better
> from a performance POV, as it would avoid the expensive vmexits we have with
> hypercalls.
>
>
I just went through the whole discussion; it seems David mentioned he would
re-write this series with a virtio frontend and the VMM in user space taking
care of the requests. I will wait for that series to land.
Thanks,
Pavan
^ permalink raw reply [flat|nested] 27+ messages in thread
end of thread, other threads:[~2023-04-27 11:28 UTC | newest]
Thread overview: 27+ messages
2023-03-30 22:43 [RFC PATCH 0/6] Improve VM DVFS and task placement behavior David Dai
2023-03-30 22:43 ` [RFC PATCH 2/6] kvm: arm64: Add support for get_cur_cpufreq service David Dai
2023-04-05 8:04 ` Quentin Perret
2023-03-30 22:43 ` [RFC PATCH 3/6] kvm: arm64: Add support for util_hint service David Dai
2023-03-30 22:43 ` [RFC PATCH 4/6] kvm: arm64: Add support for get_freqtbl service David Dai
2023-03-30 23:20 ` [RFC PATCH 0/6] Improve VM DVFS and task placement behavior Oliver Upton
2023-03-30 23:36 ` Saravana Kannan
2023-03-30 23:40 ` Oliver Upton
2023-03-31 0:34 ` Saravana Kannan
2023-03-31 0:49 ` Matthew Wilcox
2023-04-03 10:18 ` Mel Gorman
2023-04-04 19:43 ` Oliver Upton
2023-04-04 20:49 ` Marc Zyngier
2023-04-05 7:48 ` Quentin Perret
2023-04-05 8:33 ` Vincent Guittot
2023-04-05 21:07 ` Saravana Kannan
2023-04-06 12:52 ` Quentin Perret
2023-04-06 21:39 ` David Dai
2023-04-05 21:00 ` Saravana Kannan
2023-04-06 8:42 ` Marc Zyngier
2023-04-05 8:05 ` Peter Zijlstra
2023-04-05 21:08 ` Saravana Kannan
2023-04-06 7:36 ` Peter Zijlstra
2023-04-06 7:38 ` Peter Zijlstra
2023-04-27 7:46 ` Pavan Kondeti
2023-04-27 9:52 ` Gupta, Pankaj
2023-04-27 11:26 ` Pavan Kondeti