* [PATCH 1/4] kvm: x86/pmu: Introduce masked events to the pmu event filter
2022-05-23 21:41 [PATCH 0/4] kvm: x86/pmu: Introduce and test masked events Aaron Lewis
@ 2022-05-23 21:41 ` Aaron Lewis
2022-05-24 6:04 ` kernel test robot
` (2 more replies)
2022-05-23 21:41 ` [PATCH 2/4] selftests: kvm/x86: Add flags when creating a " Aaron Lewis
` (2 subsequent siblings)
3 siblings, 3 replies; 8+ messages in thread
From: Aaron Lewis @ 2022-05-23 21:41 UTC (permalink / raw)
To: kvm; +Cc: pbonzini, jmattson, seanjc, Aaron Lewis
When building an event list for the pmu event filter, fitting all the
events in the limited space can be a challenge. It becomes
particularly challenging when trying to include various unit mask
combinations for a particular event the guest is allowed or not allowed
to program. Instead of increasing the size of the list to allow for
these, add a new encoding in the pmu event filter's events field. These
encoded events can then be matched against the event the guest is
attempting to program to determine whether the guest should have access
to it.
The encoded values are: mask, match, and invert. When filtering events,
the mask is applied to the guest's unit mask to see whether it matches
the match value (i.e. unit_mask & mask == match). If it does, the event
is allowed if the pmu event filter is an allow list and denied if it is
a deny list. Additionally, the result is reversed if the invert flag is
set in the encoded event.
This feature is enabled by setting the flags field to
KVM_PMU_EVENT_FLAG_MASKED_EVENTS.
Events can be encoded by using KVM_PMU_EVENT_ENCODE_MASKED_EVENT().
It is an error to have a bit set outside the valid encoded bits; calls
to KVM_SET_PMU_EVENT_FILTER will return -EINVAL in such cases, including
when bits in the high nybble[1], which are valid only for AMD, are set
when the ioctl is called on Intel.
[1] bits 35:32 in the event and bits 11:8 in the eventsel.
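To illustrate, here is a minimal userspace-style sketch of the matching
semantics described above (the struct and helper names are invented for
the example; this is not the kernel's implementation):

    #include <stdbool.h>
    #include <stdint.h>

    /* One decoded masked event, per the encoding described above. */
    struct masked_event {
            uint64_t select;        /* event select: bits 35:32 and 7:0 */
            uint8_t mask;           /* applied to the guest's unit mask */
            uint8_t match;          /* value the masked unit mask must equal */
            bool invert;            /* reverses the allow/deny outcome */
    };

    static bool masked_event_matches(struct masked_event e,
                                     uint64_t guest_select,
                                     uint8_t guest_unit_mask)
    {
            return e.select == guest_select &&
                   (guest_unit_mask & e.mask) == e.match;
    }

On a match, an allow list permits the event and a deny list blocks it,
with the invert flag flipping that outcome; if no entry matches, an
allow list blocks and a deny list permits.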
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
---
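A note on filter_masked_event() below: in masked mode the events list is
sorted only by event select, so several entries can share one select
value and bsearch() may land anywhere inside that run; the lookup
therefore walks outward from the hit in both directions. A simplified,
self-contained sketch of that shape (the names are invented for the
example, not the kernel's):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* Event select occupies bits 35:32 and 7:0 (AMD64_EVENTSEL_EVENT). */
    #define EVENT_BITS 0xf000000ffULL

    static int cmp_select(const void *pa, const void *pb)
    {
            uint64_t a = *(const uint64_t *)pa & EVENT_BITS;
            uint64_t b = *(const uint64_t *)pb & EVENT_BITS;

            return (a > b) - (a < b);
    }

    /* Returns a run entry whose unit-mask test passes, or NULL. */
    static const uint64_t *find_match(const uint64_t *events, size_t n,
                                      uint64_t key,
                                      bool (*umask_ok)(uint64_t ev, uint64_t key))
    {
            const uint64_t *hit, *e;

            hit = bsearch(&key, events, n, sizeof(*events), cmp_select);
            if (!hit)
                    return NULL;

            /* Walk backward through entries with the same select... */
            for (e = hit; e >= events && !cmp_select(e, &key); e--)
                    if (umask_ok(*e, key))
                            return e;

            /* ...then forward through the rest of the run. */
            for (e = hit + 1; e < events + n && !cmp_select(e, &key); e++)
                    if (umask_ok(*e, key))
                            return e;

            return NULL;
    }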
Documentation/virt/kvm/api.rst | 46 ++++++++++--
arch/x86/include/uapi/asm/kvm.h | 8 ++
arch/x86/kvm/pmu.c | 128 +++++++++++++++++++++++++++++---
arch/x86/kvm/pmu.h | 1 +
arch/x86/kvm/svm/pmu.c | 12 +++
arch/x86/kvm/vmx/pmu_intel.c | 12 +++
6 files changed, 189 insertions(+), 18 deletions(-)
diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 4a900cdbc62e..671c0bb06eb5 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -4951,7 +4951,13 @@ using this ioctl.
:Architectures: x86
:Type: vm ioctl
:Parameters: struct kvm_pmu_event_filter (in)
-:Returns: 0 on success, -1 on error
+:Returns: 0 on success,
+ -EFAULT args[0] cannot be accessed.
+ -EINVAL args[0] contains invalid data in the filter or events field.
+ Note: event validation is only done for modes where
+ the flags field is non-zero.
+ -E2BIG nevents is too large.
+ -ENOMEM not enough memory to allocate the filter.
::
@@ -4964,14 +4970,42 @@ using this ioctl.
__u64 events[0];
};
-This ioctl restricts the set of PMU events that the guest can program.
-The argument holds a list of events which will be allowed or denied.
-The eventsel+umask of each event the guest attempts to program is compared
-against the events field to determine whether the guest should have access.
+This ioctl restricts the set of PMU events the guest can program. The
+argument holds a list of events which will be allowed or denied.
+
The events field only controls general purpose counters; fixed purpose
counters are controlled by the fixed_counter_bitmap.
-No flags are defined yet, the field must be zero.
+Valid values for 'flags'::
+
+``0``
+
+This is the default behavior for the pmu event filter, and is used when the
+flags field is clear. In this mode the eventsel+umask for the event the
+guest is attempting to program is compared against each event in the events
+field to determine whether the guest should have access to it.
+
+``KVM_PMU_EVENT_FLAG_MASKED_EVENTS``
+
+In this mode each event in the events field will be encoded with mask, match,
+and invert values in addition to an eventsel. These encoded events will be
+matched against the event the guest is attempting to program to determine
+whether the guest should have access to it. When matching an encoded event
+with a guest event these steps are followed:
+ 1. Match the encoded eventsel to the guest eventsel.
+ 2. If that matches, match the mask and match values from the encoded event to
+ the guest's unit mask (i.e. unit_mask & mask == match).
+ 3. If that matches, the guest is allowed to program the event if the filter
+ is an allow list, or is not allowed to program it if it is a deny list.
+ 4. If the invert value is set in the encoded event, reverse the meaning of #3
+ (i.e. deny if it is an allow list, allow if it is a deny list).
+
+To encode an event in the pmu_event_filter use
+KVM_PMU_EVENT_ENCODE_MASKED_EVENT().
+
+If a bit is set in an encoded event that is not a part of the bits used for
+eventsel, mask, match, or invert, a call to KVM_SET_PMU_EVENT_FILTER will
+return -EINVAL.
Valid values for 'action'::
diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index bf6e96011dfe..850af8ee724f 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -521,6 +521,14 @@ struct kvm_pmu_event_filter {
#define KVM_PMU_EVENT_ALLOW 0
#define KVM_PMU_EVENT_DENY 1
+#define KVM_PMU_EVENT_FLAG_MASKED_EVENTS (1u << 0)
+
+#define KVM_PMU_EVENT_ENCODE_MASKED_EVENT(select, mask, match, invert) \
+ (((select) & 0xfful) | (((select) & 0xf00ul) << 24) | \
+ (((mask) & 0xfful) << 24) | \
+ (((match) & 0xfful) << 8) | \
+ (((invert) & 0x1ul) << 23))
+
/* for KVM_{GET,SET,HAS}_DEVICE_ATTR */
#define KVM_VCPU_TSC_CTRL 0 /* control group for the timestamp counter (TSC) */
#define KVM_VCPU_TSC_OFFSET 0 /* attribute for the TSC offset */
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 0604bc29f0b8..c2a9d7841922 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -171,14 +171,99 @@ static bool pmc_resume_counter(struct kvm_pmc *pmc)
return true;
}
-static int cmp_u64(const void *pa, const void *pb)
+static inline u64 get_event(u64 eventsel)
{
- u64 a = *(u64 *)pa;
- u64 b = *(u64 *)pb;
+ return eventsel & AMD64_EVENTSEL_EVENT;
+}
+static inline u8 get_unit_mask(u64 eventsel)
+{
+ return (eventsel & ARCH_PERFMON_EVENTSEL_UMASK) >> 8;
+}
+
+static inline u8 get_counter_mask(u64 eventsel)
+{
+ return (eventsel & ARCH_PERFMON_EVENTSEL_CMASK) >> 24;
+}
+
+static inline bool get_invert_comparison(u64 eventsel)
+{
+ return !!(eventsel & ARCH_PERFMON_EVENTSEL_INV);
+}
+
+static inline int cmp_safe64(u64 a, u64 b)
+{
return (a > b) - (a < b);
}
+static int cmp_eventsel_event(const void *pa, const void *pb)
+{
+ return cmp_safe64(*(u64 *)pa & AMD64_EVENTSEL_EVENT,
+ *(u64 *)pb & AMD64_EVENTSEL_EVENT);
+}
+
+static int cmp_u64(const void *pa, const void *pb)
+{
+ return cmp_safe64(*(u64 *)pa,
+ *(u64 *)pb);
+}
+
+static bool is_match(u64 masked_event, u64 eventsel)
+{
+ u8 mask = get_counter_mask(masked_event);
+ u8 match = get_unit_mask(masked_event);
+ u8 unit_mask = get_unit_mask(eventsel);
+
+ return (unit_mask & mask) == match;
+}
+
+static bool is_event_allowed(u64 masked_event, u32 action)
+{
+ if (get_invert_comparison(masked_event))
+ return action != KVM_PMU_EVENT_ALLOW;
+
+ return action == KVM_PMU_EVENT_ALLOW;
+}
+
+static bool filter_masked_event(struct kvm_pmu_event_filter *filter,
+ u64 eventsel)
+{
+ u64 key = get_event(eventsel);
+ u64 *event, *evt;
+
+ event = bsearch(&key, filter->events, filter->nevents, sizeof(u64),
+ cmp_eventsel_event);
+
+ if (event) {
+ /* Walk the masked events backward looking for a match. */
+ for (evt = event; evt >= filter->events &&
+ get_event(*evt) == get_event(eventsel); evt--)
+ if (is_match(*evt, eventsel))
+ return is_event_allowed(*evt, filter->action);
+
+ /* Walk the masked events forward looking for a match. */
+ for (evt = event + 1;
+ evt < (filter->events + filter->nevents) &&
+ get_event(*evt) == get_event(eventsel); evt++)
+ if (is_match(*evt, eventsel))
+ return is_event_allowed(*evt, filter->action);
+ }
+
+ return filter->action == KVM_PMU_EVENT_DENY;
+}
+
+static bool filter_default_event(struct kvm_pmu_event_filter *filter,
+ u64 eventsel)
+{
+ u64 key = eventsel & AMD64_RAW_EVENT_MASK_NB;
+
+ if (bsearch(&key, filter->events, filter->nevents,
+ sizeof(u64), cmp_u64))
+ return filter->action == KVM_PMU_EVENT_ALLOW;
+
+ return filter->action == KVM_PMU_EVENT_DENY;
+}
+
void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
{
u64 config;
@@ -200,14 +285,11 @@ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
filter = srcu_dereference(kvm->arch.pmu_event_filter, &kvm->srcu);
if (filter) {
- __u64 key = eventsel & AMD64_RAW_EVENT_MASK_NB;
-
- if (bsearch(&key, filter->events, filter->nevents,
- sizeof(__u64), cmp_u64))
- allow_event = filter->action == KVM_PMU_EVENT_ALLOW;
- else
- allow_event = filter->action == KVM_PMU_EVENT_DENY;
+ allow_event = (filter->flags & KVM_PMU_EVENT_FLAG_MASKED_EVENTS) ?
+ filter_masked_event(filter, eventsel) :
+ filter_default_event(filter, eventsel);
}
+
if (!allow_event)
return;
@@ -548,8 +630,22 @@ void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 perf_hw_id)
}
EXPORT_SYMBOL_GPL(kvm_pmu_trigger_event);
+int has_invalid_event(struct kvm_pmu_event_filter *filter)
+{
+ u64 event_mask;
+ int i;
+
+ event_mask = kvm_x86_ops.pmu_ops->get_event_mask(filter->flags);
+ for(i = 0; i < filter->nevents; i++)
+ if (filter->events[i] & ~event_mask)
+ return true;
+
+ return false;
+}
+
int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp)
{
+ int (*cmp)(const void *a, const void *b) = cmp_u64;
struct kvm_pmu_event_filter tmp, *filter;
size_t size;
int r;
@@ -561,7 +657,7 @@ int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp)
tmp.action != KVM_PMU_EVENT_DENY)
return -EINVAL;
- if (tmp.flags != 0)
+ if (tmp.flags & ~KVM_PMU_EVENT_FLAG_MASKED_EVENTS)
return -EINVAL;
if (tmp.nevents > KVM_PMU_EVENT_FILTER_MAX_EVENTS)
@@ -579,10 +675,18 @@ int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp)
/* Ensure nevents can't be changed between the user copies. */
*filter = tmp;
+ r = -EINVAL;
+ /* To maintain backwards compatibility don't validate flags == 0. */
+ if (filter->flags != 0 && has_invalid_event(filter))
+ goto cleanup;
+
+ if (filter->flags & KVM_PMU_EVENT_FLAG_MASKED_EVENTS)
+ cmp = cmp_eventsel_event;
+
/*
* Sort the in-kernel list so that we can search it with bsearch.
*/
- sort(&filter->events, filter->nevents, sizeof(__u64), cmp_u64, NULL);
+ sort(&filter->events, filter->nevents, sizeof(u64), cmp, NULL);
mutex_lock(&kvm->lock);
filter = rcu_replace_pointer(kvm->arch.pmu_event_filter, filter,
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 22992b049d38..7a0c2ee9f121 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -37,6 +37,7 @@ struct kvm_pmu_ops {
void (*reset)(struct kvm_vcpu *vcpu);
void (*deliver_pmi)(struct kvm_vcpu *vcpu);
void (*cleanup)(struct kvm_vcpu *vcpu);
+ u64 (*get_event_mask)(u32 flag);
};
static inline u64 pmc_bitmask(struct kvm_pmc *pmc)
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 16a5ebb420cf..0cc66aa2d99a 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -342,6 +342,17 @@ static void amd_pmu_reset(struct kvm_vcpu *vcpu)
}
}
+static u64 amd_pmu_get_event_mask(u32 flag)
+{
+ if (flag == KVM_PMU_EVENT_FLAG_MASKED_EVENTS)
+ return AMD64_EVENTSEL_EVENT |
+ ARCH_PERFMON_EVENTSEL_UMASK |
+ ARCH_PERFMON_EVENTSEL_INV |
+ ARCH_PERFMON_EVENTSEL_CMASK;
+ return AMD64_EVENTSEL_EVENT |
+ ARCH_PERFMON_EVENTSEL_UMASK;
+}
+
struct kvm_pmu_ops amd_pmu_ops = {
.pmc_perf_hw_id = amd_pmc_perf_hw_id,
.pmc_is_enabled = amd_pmc_is_enabled,
@@ -355,4 +366,5 @@ struct kvm_pmu_ops amd_pmu_ops = {
.refresh = amd_pmu_refresh,
.init = amd_pmu_init,
.reset = amd_pmu_reset,
+ .get_event_mask = amd_pmu_get_event_mask,
};
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index b82b6709d7a8..6efddb1a8d9d 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -719,6 +719,17 @@ static void intel_pmu_cleanup(struct kvm_vcpu *vcpu)
intel_pmu_release_guest_lbr_event(vcpu);
}
+static u64 intel_pmu_get_event_mask(u32 flag)
+{
+ if (flag == KVM_PMU_EVENT_FLAG_MASKED_EVENTS)
+ return ARCH_PERFMON_EVENTSEL_EVENT |
+ ARCH_PERFMON_EVENTSEL_UMASK |
+ ARCH_PERFMON_EVENTSEL_INV |
+ ARCH_PERFMON_EVENTSEL_CMASK;
+ return ARCH_PERFMON_EVENTSEL_EVENT |
+ ARCH_PERFMON_EVENTSEL_UMASK;
+}
+
struct kvm_pmu_ops intel_pmu_ops = {
.pmc_perf_hw_id = intel_pmc_perf_hw_id,
.pmc_is_enabled = intel_pmc_is_enabled,
@@ -734,4 +745,5 @@ struct kvm_pmu_ops intel_pmu_ops = {
.reset = intel_pmu_reset,
.deliver_pmi = intel_pmu_deliver_pmi,
.cleanup = intel_pmu_cleanup,
+ .get_event_mask = intel_pmu_get_event_mask,
};
--
2.36.1.124.g0e6072fb45-goog
* Re: [PATCH 1/4] kvm: x86/pmu: Introduce masked events to the pmu event filter
2022-05-23 21:41 ` [PATCH 1/4] kvm: x86/pmu: Introduce masked events to the pmu event filter Aaron Lewis
@ 2022-05-24 6:04 ` kernel test robot
2022-05-24 18:27 ` kernel test robot
2022-05-24 23:17 ` kernel test robot
2 siblings, 0 replies; 8+ messages in thread
From: kernel test robot @ 2022-05-24 6:04 UTC (permalink / raw)
To: Aaron Lewis, kvm
Cc: llvm, kbuild-all, pbonzini, jmattson, seanjc, Aaron Lewis
Hi Aaron,
Thank you for the patch! Perhaps something to improve:
[auto build test WARNING on kvm/master]
[also build test WARNING on v5.18]
[cannot apply to mst-vhost/linux-next next-20220523]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]
url: https://github.com/intel-lab-lkp/linux/commits/Aaron-Lewis/kvm-x86-pmu-Introduce-and-test-masked-events/20220524-054438
base: https://git.kernel.org/pub/scm/virt/kvm/kvm.git master
config: i386-randconfig-a011 (https://download.01.org/0day-ci/archive/20220524/202205241319.3YATdncf-lkp@intel.com/config)
compiler: clang version 15.0.0 (https://github.com/llvm/llvm-project 10c9ecce9f6096e18222a331c5e7d085bd813f75)
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/intel-lab-lkp/linux/commit/f189a455a73825b7025d8feff486db18ebef171f
git remote add linux-review https://github.com/intel-lab-lkp/linux
git fetch --no-tags linux-review Aaron-Lewis/kvm-x86-pmu-Introduce-and-test-masked-events/20220524-054438
git checkout f189a455a73825b7025d8feff486db18ebef171f
# save the config file
mkdir build_dir && cp config build_dir/.config
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=i386 SHELL=/bin/bash arch/x86/kvm/
If you fix the issue, kindly add following tag where applicable
Reported-by: kernel test robot <lkp@intel.com>
All warnings (new ones prefixed by >>):
>> arch/x86/kvm/pmu.c:633:5: warning: no previous prototype for function 'has_invalid_event' [-Wmissing-prototypes]
int has_invalid_event(struct kvm_pmu_event_filter *filter)
^
arch/x86/kvm/pmu.c:633:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
int has_invalid_event(struct kvm_pmu_event_filter *filter)
^
static
1 warning generated.
vim +/has_invalid_event +633 arch/x86/kvm/pmu.c
632
> 633 int has_invalid_event(struct kvm_pmu_event_filter *filter)
634 {
635 u64 event_mask;
636 int i;
637
638 event_mask = kvm_x86_ops.pmu_ops->get_event_mask(filter->flags);
639 for(i = 0; i < filter->nevents; i++)
640 if (filter->events[i] & ~event_mask)
641 return true;
642
643 return false;
644 }
645
--
0-DAY CI Kernel Test Service
https://01.org/lkp
* Re: [PATCH 1/4] kvm: x86/pmu: Introduce masked events to the pmu event filter
2022-05-23 21:41 ` [PATCH 1/4] kvm: x86/pmu: Introduce masked events to the pmu event filter Aaron Lewis
2022-05-24 6:04 ` kernel test robot
@ 2022-05-24 18:27 ` kernel test robot
2022-05-24 23:17 ` kernel test robot
2 siblings, 0 replies; 8+ messages in thread
From: kernel test robot @ 2022-05-24 18:27 UTC (permalink / raw)
To: Aaron Lewis, kvm; +Cc: kbuild-all, pbonzini, jmattson, seanjc, Aaron Lewis
Hi Aaron,
Thank you for the patch! Yet something to improve:
[auto build test ERROR on kvm/master]
[also build test ERROR on v5.18]
[cannot apply to mst-vhost/linux-next next-20220524]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]
url: https://github.com/intel-lab-lkp/linux/commits/Aaron-Lewis/kvm-x86-pmu-Introduce-and-test-masked-events/20220524-054438
base: https://git.kernel.org/pub/scm/virt/kvm/kvm.git master
config: i386-randconfig-a004-20211129 (https://download.01.org/0day-ci/archive/20220525/202205250255.HGMufiYY-lkp@intel.com/config)
compiler: gcc-11 (Debian 11.3.0-1) 11.3.0
reproduce (this is a W=1 build):
# https://github.com/intel-lab-lkp/linux/commit/f189a455a73825b7025d8feff486db18ebef171f
git remote add linux-review https://github.com/intel-lab-lkp/linux
git fetch --no-tags linux-review Aaron-Lewis/kvm-x86-pmu-Introduce-and-test-masked-events/20220524-054438
git checkout f189a455a73825b7025d8feff486db18ebef171f
# save the config file
mkdir build_dir && cp config build_dir/.config
make W=1 O=build_dir ARCH=i386 SHELL=/bin/bash arch/x86/kvm/
If you fix the issue, kindly add following tag where applicable
Reported-by: kernel test robot <lkp@intel.com>
All errors (new ones prefixed by >>):
>> arch/x86/kvm/pmu.c:633:5: error: no previous prototype for 'has_invalid_event' [-Werror=missing-prototypes]
633 | int has_invalid_event(struct kvm_pmu_event_filter *filter)
| ^~~~~~~~~~~~~~~~~
cc1: all warnings being treated as errors
vim +/has_invalid_event +633 arch/x86/kvm/pmu.c
632
> 633 int has_invalid_event(struct kvm_pmu_event_filter *filter)
634 {
635 u64 event_mask;
636 int i;
637
638 event_mask = kvm_x86_ops.pmu_ops->get_event_mask(filter->flags);
639 for(i = 0; i < filter->nevents; i++)
640 if (filter->events[i] & ~event_mask)
641 return true;
642
643 return false;
644 }
645
--
0-DAY CI Kernel Test Service
https://01.org/lkp
* Re: [PATCH 1/4] kvm: x86/pmu: Introduce masked events to the pmu event filter
2022-05-23 21:41 ` [PATCH 1/4] kvm: x86/pmu: Introduce masked events to the pmu event filter Aaron Lewis
2022-05-24 6:04 ` kernel test robot
2022-05-24 18:27 ` kernel test robot
@ 2022-05-24 23:17 ` kernel test robot
2 siblings, 0 replies; 8+ messages in thread
From: kernel test robot @ 2022-05-24 23:17 UTC (permalink / raw)
To: Aaron Lewis, kvm
Cc: llvm, kbuild-all, pbonzini, jmattson, seanjc, Aaron Lewis
Hi Aaron,
Thank you for the patch! Yet something to improve:
[auto build test ERROR on kvm/master]
[also build test ERROR on v5.18]
[cannot apply to mst-vhost/linux-next next-20220524]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]
url: https://github.com/intel-lab-lkp/linux/commits/Aaron-Lewis/kvm-x86-pmu-Introduce-and-test-masked-events/20220524-054438
base: https://git.kernel.org/pub/scm/virt/kvm/kvm.git master
config: i386-randconfig-a002 (https://download.01.org/0day-ci/archive/20220525/202205250725.AFgtAciB-lkp@intel.com/config)
compiler: clang version 15.0.0 (https://github.com/llvm/llvm-project 10c9ecce9f6096e18222a331c5e7d085bd813f75)
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/intel-lab-lkp/linux/commit/f189a455a73825b7025d8feff486db18ebef171f
git remote add linux-review https://github.com/intel-lab-lkp/linux
git fetch --no-tags linux-review Aaron-Lewis/kvm-x86-pmu-Introduce-and-test-masked-events/20220524-054438
git checkout f189a455a73825b7025d8feff486db18ebef171f
# save the config file
mkdir build_dir && cp config build_dir/.config
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=i386 SHELL=/bin/bash
If you fix the issue, kindly add following tag where applicable
Reported-by: kernel test robot <lkp@intel.com>
All errors (new ones prefixed by >>):
>> arch/x86/kvm/pmu.c:633:5: error: no previous prototype for function 'has_invalid_event' [-Werror,-Wmissing-prototypes]
int has_invalid_event(struct kvm_pmu_event_filter *filter)
^
arch/x86/kvm/pmu.c:633:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
int has_invalid_event(struct kvm_pmu_event_filter *filter)
^
static
1 error generated.
vim +/has_invalid_event +633 arch/x86/kvm/pmu.c
632
> 633 int has_invalid_event(struct kvm_pmu_event_filter *filter)
634 {
635 u64 event_mask;
636 int i;
637
638 event_mask = kvm_x86_ops.pmu_ops->get_event_mask(filter->flags);
639 for(i = 0; i < filter->nevents; i++)
640 if (filter->events[i] & ~event_mask)
641 return true;
642
643 return false;
644 }
645
--
0-DAY CI Kernel Test Service
https://01.org/lkp
* [PATCH 2/4] selftests: kvm/x86: Add flags when creating a pmu event filter
2022-05-23 21:41 [PATCH 0/4] kvm: x86/pmu: Introduce and test masked events Aaron Lewis
2022-05-23 21:41 ` [PATCH 1/4] kvm: x86/pmu: Introduce masked events to the pmu event filter Aaron Lewis
@ 2022-05-23 21:41 ` Aaron Lewis
2022-05-23 21:41 ` [PATCH 3/4] selftests: kvm/x86: Add testing for masked events Aaron Lewis
2022-05-23 21:41 ` [PATCH 4/4] selftests: kvm/x86: Add testing for KVM_SET_PMU_EVENT_FILTER Aaron Lewis
3 siblings, 0 replies; 8+ messages in thread
From: Aaron Lewis @ 2022-05-23 21:41 UTC (permalink / raw)
To: kvm; +Cc: pbonzini, jmattson, seanjc, Aaron Lewis
Now that the flags field can be non-zero, pass it in when creating a
pmu event filter.
This is needed in preparation for testing masked events.
No functional change intended.
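For instance, existing call sites keep today's behavior by passing 0,
while masked-event tests (added in the next patch) pass the new flag; a
sketch using the helper as modified below:

    /* Default filtering, unchanged behavior: */
    f = create_pmu_event_filter(event_list, ARRAY_SIZE(event_list),
                                KVM_PMU_EVENT_ALLOW, 0);

    /* Masked events, enabled by the flag introduced in patch 1: */
    f = create_pmu_event_filter(masked_events, nmasked_events,
                                KVM_PMU_EVENT_ALLOW,
                                KVM_PMU_EVENT_FLAG_MASKED_EVENTS);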
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
---
.../testing/selftests/kvm/x86_64/pmu_event_filter_test.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 93d77574b255..4bff4c71ac45 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -222,14 +222,15 @@ static struct kvm_pmu_event_filter *alloc_pmu_event_filter(uint32_t nevents)
static struct kvm_pmu_event_filter *
-create_pmu_event_filter(const uint64_t event_list[],
- int nevents, uint32_t action)
+create_pmu_event_filter(const uint64_t event_list[], int nevents,
+ uint32_t action, uint32_t flags)
{
struct kvm_pmu_event_filter *f;
int i;
f = alloc_pmu_event_filter(nevents);
f->action = action;
+ f->flags = flags;
for (i = 0; i < nevents; i++)
f->events[i] = event_list[i];
@@ -240,7 +241,7 @@ static struct kvm_pmu_event_filter *event_filter(uint32_t action)
{
return create_pmu_event_filter(event_list,
ARRAY_SIZE(event_list),
- action);
+ action, 0);
}
/*
@@ -287,7 +288,7 @@ static void test_amd_deny_list(struct kvm_vm *vm)
struct kvm_pmu_event_filter *f;
uint64_t count;
- f = create_pmu_event_filter(&event, 1, KVM_PMU_EVENT_DENY);
+ f = create_pmu_event_filter(&event, 1, KVM_PMU_EVENT_DENY, 0);
count = test_with_filter(vm, f);
free(f);
--
2.36.1.124.g0e6072fb45-goog
* [PATCH 3/4] selftests: kvm/x86: Add testing for masked events
2022-05-23 21:41 [PATCH 0/4] kvm: x86/pmu: Introduce and test masked events Aaron Lewis
2022-05-23 21:41 ` [PATCH 1/4] kvm: x86/pmu: Introduce masked events to the pmu event filter Aaron Lewis
2022-05-23 21:41 ` [PATCH 2/4] selftests: kvm/x86: Add flags when creating a " Aaron Lewis
@ 2022-05-23 21:41 ` Aaron Lewis
2022-05-23 21:41 ` [PATCH 4/4] selftests: kvm/x86: Add testing for KVM_SET_PMU_EVENT_FILTER Aaron Lewis
3 siblings, 0 replies; 8+ messages in thread
From: Aaron Lewis @ 2022-05-23 21:41 UTC (permalink / raw)
To: kvm; +Cc: pbonzini, jmattson, seanjc, Aaron Lewis
Add testing for the pmu event filter's masked events. These tests run
through different ways of finding an event the guest is attempting to
program in an event list. For any given eventsel, there may be
multiple instances of it in an event list. These tests try different
ways of looking up a match to force the matching algorithm to walk the
relevant eventsels and ensure it is able to a) find a match, and b) stay
within its bounds.
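For instance, the tests first fill the list with entries that share one
event select and differ only in their match values, then repurpose two
slots for neighboring event selects so that, once the kernel sorts the
list, they bracket the run and the walk's bounds are exercised (abridged
from the diff below):

    /* Eleven entries with the same select, match values 1..11: */
    for (i = 0; i < nmasked_events; i++)
            masked_events[i] = ENCODE_MASKED_EVENT(event, ~0x00, i + 1, 0);

    /* Adjacent selects that bracket the run after the kernel sorts it: */
    masked_events[0] = ENCODE_MASKED_EVENT(prev_event, ~0x00, 0, 0);
    masked_events[1] = ENCODE_MASKED_EVENT(next_event, ~0x00, 0, 0);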
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
---
.../kvm/x86_64/pmu_event_filter_test.c | 107 ++++++++++++++++++
1 file changed, 107 insertions(+)
diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 4bff4c71ac45..4071043bbe26 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -18,8 +18,12 @@
/*
* In lieu of copying perf_event.h into tools...
*/
+#define ARCH_PERFMON_EVENTSEL_EVENT 0x000000FFULL
#define ARCH_PERFMON_EVENTSEL_OS (1ULL << 17)
#define ARCH_PERFMON_EVENTSEL_ENABLE (1ULL << 22)
+#define AMD64_EVENTSEL_EVENT \
+ (ARCH_PERFMON_EVENTSEL_EVENT | (0x0FULL << 32))
+
union cpuid10_eax {
struct {
@@ -445,6 +449,107 @@ static bool use_amd_pmu(void)
is_zen3(entry->eax));
}
+#define ENCODE_MASKED_EVENT(select, mask, match, invert) \
+ KVM_PMU_EVENT_ENCODE_MASKED_EVENT(select, mask, match, invert)
+
+static void expect_success(uint64_t count)
+{
+ if (count != NUM_BRANCHES)
+ pr_info("masked filter: Branch instructions retired = %lu (expected %u)\n",
+ count, NUM_BRANCHES);
+ TEST_ASSERT(count, "Allowed PMU event is not counting");
+}
+
+static void expect_failure(uint64_t count)
+{
+ if (count)
+ pr_info("masked filter: Branch instructions retired = %lu (expected 0)\n",
+ count);
+ TEST_ASSERT(!count, "Disallowed PMU Event is counting");
+}
+
+static void run_masked_filter_test(struct kvm_vm *vm, uint64_t masked_events[],
+ const int nmasked_events, uint64_t event,
+ uint32_t action, bool invert,
+ void (*expected_func)(uint64_t))
+{
+ struct kvm_pmu_event_filter *f;
+ uint64_t old_event;
+ uint64_t count;
+ int i;
+
+ for (i = 0; i < nmasked_events; i++) {
+ if ((masked_events[i] & AMD64_EVENTSEL_EVENT) != EVENT(event, 0))
+ continue;
+
+ old_event = masked_events[i];
+
+ masked_events[i] =
+ ENCODE_MASKED_EVENT(event, ~0x00, 0x00, invert);
+
+ f = create_pmu_event_filter(masked_events, nmasked_events, action,
+ KVM_PMU_EVENT_FLAG_MASKED_EVENTS);
+
+ count = test_with_filter(vm, f);
+ free(f);
+
+ expected_func(count);
+
+ masked_events[i] = old_event;
+ }
+}
+
+static void run_masked_filter_tests(struct kvm_vm *vm, uint64_t masked_events[],
+ const int nmasked_events, uint64_t event)
+{
+ run_masked_filter_test(vm, masked_events, nmasked_events, event,
+ KVM_PMU_EVENT_ALLOW, /*invert=*/false,
+ expect_success);
+ run_masked_filter_test(vm, masked_events, nmasked_events, event,
+ KVM_PMU_EVENT_ALLOW, /*invert=*/true,
+ expect_failure);
+ run_masked_filter_test(vm, masked_events, nmasked_events, event,
+ KVM_PMU_EVENT_DENY, /*invert=*/false,
+ expect_failure);
+ run_masked_filter_test(vm, masked_events, nmasked_events, event,
+ KVM_PMU_EVENT_DENY, /*invert=*/true,
+ expect_success);
+}
+
+static void test_masked_filters(struct kvm_vm *vm)
+{
+ uint64_t masked_events[11];
+ const int nmasked_events = ARRAY_SIZE(masked_events);
+ uint64_t prev_event, event, next_event;
+ int i;
+
+ if (use_intel_pmu()) {
+ /* Instructions retired */
+ prev_event = 0xc0;
+ event = INTEL_BR_RETIRED;
+ /* Branch misses retired */
+ next_event = 0xc5;
+ } else {
+ TEST_ASSERT(use_amd_pmu(), "Unknown platform");
+ /* Retired instructions */
+ prev_event = 0xc0;
+ event = AMD_ZEN_BR_RETIRED;
+ /* Retired branch instructions mispredicted */
+ next_event = 0xc3;
+ }
+
+ for (i = 0; i < nmasked_events; i++)
+ masked_events[i] =
+ ENCODE_MASKED_EVENT(event, ~0x00, i + 1, 0);
+
+ run_masked_filter_tests(vm, masked_events, nmasked_events, event);
+
+ masked_events[0] = ENCODE_MASKED_EVENT(prev_event, ~0x00, 0, 0);
+ masked_events[1] = ENCODE_MASKED_EVENT(next_event, ~0x00, 0, 0);
+
+ run_masked_filter_tests(vm, masked_events, nmasked_events, event);
+}
+
int main(int argc, char *argv[])
{
void (*guest_code)(void) = NULL;
@@ -489,6 +594,8 @@ int main(int argc, char *argv[])
test_not_member_deny_list(vm);
test_not_member_allow_list(vm);
+ test_masked_filters(vm);
+
kvm_vm_free(vm);
test_pmu_config_disable(guest_code);
--
2.36.1.124.g0e6072fb45-goog
* [PATCH 4/4] selftests: kvm/x86: Add testing for KVM_SET_PMU_EVENT_FILTER
2022-05-23 21:41 [PATCH 0/4] kvm: x86/pmu: Introduce and test masked events Aaron Lewis
` (2 preceding siblings ...)
2022-05-23 21:41 ` [PATCH 3/4] selftests: kvm/x86: Add testing for masked events Aaron Lewis
@ 2022-05-23 21:41 ` Aaron Lewis
3 siblings, 0 replies; 8+ messages in thread
From: Aaron Lewis @ 2022-05-23 21:41 UTC (permalink / raw)
To: kvm; +Cc: pbonzini, jmattson, seanjc, Aaron Lewis
Test that masked events are not using invalid bits, and if they are,
ensure the pmu event filter is not accepted by KVM_SET_PMU_EVENT_FILTER.
The only valid bits that can be used for masked events are set when
using KVM_PMU_EVENT_ENCODE_MASKED_EVENT(), with one caveat: if any bits
in the high nybble[1] of the eventsel for AMD are used on Intel, setting
the pmu event filter with KVM_SET_PMU_EVENT_FILTER will fail.
Also, because no validation was being done on the event list prior to
the introduction of masked events, verify that this behavior continues
for the original event type (flags == 0): even if invalid bits are set
(bits other than eventsel+umask), the pmu event filter will still be
accepted by KVM_SET_PMU_EVENT_FILTER.
[1] bits 35:32 in the event and bits 11:8 in the eventsel.
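For example (mirroring test_filter_ioctl() in the diff below):

    uint64_t e = ~0ul;      /* stray bits everywhere */
    /* Accepted with flags == 0; legacy filters are not validated. */
    /* Rejected with -EINVAL once KVM_PMU_EVENT_FLAG_MASKED_EVENTS is set. */

    e = ENCODE_MASKED_EVENT(0xff, 0xff, 0xff, 0xf);
    /* Only encodable bits are set, so this is accepted with the flag. */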
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
---
.../kvm/x86_64/pmu_event_filter_test.c | 31 +++++++++++++++++++
1 file changed, 31 insertions(+)
diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 4071043bbe26..403143ee0b6d 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -550,6 +550,36 @@ static void test_masked_filters(struct kvm_vm *vm)
run_masked_filter_tests(vm, masked_events, nmasked_events, event);
}
+static void test_filter_ioctl(struct kvm_vm *vm)
+{
+ struct kvm_pmu_event_filter *f;
+ uint64_t e = ~0ul;
+ int r;
+
+ /*
+ * Unfortunately having invalid bits set in event data is expected to
+ * pass when flags == 0 (bits other than eventsel+umask).
+ */
+ f = create_pmu_event_filter(&e, 1, KVM_PMU_EVENT_ALLOW, 0);
+ r = _vm_ioctl(vm, KVM_SET_PMU_EVENT_FILTER, (void *)f);
+ TEST_ASSERT(r == 0, "Valid PMU Event Filter is failing");
+ free(f);
+
+ f = create_pmu_event_filter(&e, 1, KVM_PMU_EVENT_ALLOW,
+ KVM_PMU_EVENT_FLAG_MASKED_EVENTS);
+ r = _vm_ioctl(vm, KVM_SET_PMU_EVENT_FILTER, (void *)f);
+ TEST_ASSERT(r != 0, "Invalid PMU Event Filter is expected to fail");
+ free(f);
+
+ e = ENCODE_MASKED_EVENT(0xff, 0xff, 0xff, 0xf);
+
+ f = create_pmu_event_filter(&e, 1, KVM_PMU_EVENT_ALLOW,
+ KVM_PMU_EVENT_FLAG_MASKED_EVENTS);
+ r = _vm_ioctl(vm, KVM_SET_PMU_EVENT_FILTER, (void *)f);
+ TEST_ASSERT(r == 0, "Valid PMU Event Filter is failing");
+ free(f);
+}
+
int main(int argc, char *argv[])
{
void (*guest_code)(void) = NULL;
@@ -595,6 +625,7 @@ int main(int argc, char *argv[])
test_not_member_allow_list(vm);
test_masked_filters(vm);
+ test_filter_ioctl(vm);
kvm_vm_free(vm);
--
2.36.1.124.g0e6072fb45-goog