public inbox for linux-perf-users@vger.kernel.org
From: "Mi, Dapeng" <dapeng1.mi@linux.intel.com>
To: sashiko@lists.linux.dev
Cc: linux-perf-users@vger.kernel.org
Subject: Re: [Patch v2 1/4] perf/x86/intel: Clear stale ACR mask before updating new mask
Date: Tue, 21 Apr 2026 13:03:48 +0800	[thread overview]
Message-ID: <3a54cbc0-a707-438f-b87e-5f608b5c26db@linux.intel.com> (raw)
In-Reply-To: <717f00c7-be8d-4f69-88ab-5a060a6a7e65@linux.intel.com>


On 4/20/2026 2:29 PM, Mi, Dapeng wrote:
> On 4/20/2026 11:24 AM, sashiko-bot@kernel.org wrote:
>> Sashiko AI review found 1 potential issue(s):
>> - [High] Unvalidated user input in ACR setup leads to cross-user hardware state corruption and PMU DoS
>> --
>>
>> commit 67dc3d7b096879ed6418f54978bf45536496156f
>> Author: Dapeng Mi <dapeng1.mi@linux.intel.com>
>>
>> perf/x86/intel: Clear stale ACR mask before updating new mask
>>
>> This commit zeroes out the ACR mask (event->hw.config1) for all scheduled
>> events before applying a new mask. This ensures that bits from a previous
>> stale ACR mask do not persist and pollute the hardware state during event
>> rescheduling.
>>
>>> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
>>> index 4768236c054bb..774ae9a4eeaf8 100644
>>> --- a/arch/x86/events/intel/core.c
>>> +++ b/arch/x86/events/intel/core.c
>>> @@ -3334,6 +3334,12 @@ static void intel_pmu_acr_late_setup(struct cpu_hw_events *cpuc)
>>>  	struct perf_event *event, *leader;
>>>  	int i, j, idx;
>>>  
>>> +	/* Clear stale ACR mask first. */
>>> +	for (i = 0; i < cpuc->n_events; i++) {
>>> +		event = cpuc->event_list[i];
>>> +		event->hw.config1 = 0;
>>> +	}
>>> +
>> Does this new clearing loop inadvertently amplify an existing regression where
>> a single user can disable Auto Counter Reload for all processes on the same CPU?
>>
>> Looking further down in the same function, the code iterates over
>> user-controlled bits from event->attr.config2:
>>
>> arch/x86/events/intel/core.c:intel_pmu_acr_late_setup() {
>>     ...
>>     for_each_set_bit(idx, (unsigned long *)&event->attr.config2, X86_PMC_IDX_MAX) {
>>         if (i + idx >= cpuc->n_events ||
>>             !is_acr_event_group(cpuc->event_list[i + idx]))
>>             return;
>>         __set_bit(cpuc->assign[i + idx], (unsigned long *)&event->hw.config1);
>>     }
>>     ...
>> }
>>
>> Because idx is derived from the user-controlled event->attr.config2 mask
>> without being validated against the actual size of the event group, a
>> user could provide an artificially large idx.
>>
>> The validation in intel_pmu_hw_config() only checks the weight of the mask
>> using hweight64(), rather than checking the actual bit indices, leaving this
>> exposed. Subsequent patches in the series do not fix this.
>>
>> If the computed index hits a non-ACR event or exceeds cpuc->n_events, the
>> loop triggers an early return. Since the newly added code zeroes out
>> event->hw.config1 for all scheduled events before this loop, wouldn't this
>> early return leave all subsequent valid ACR events with a cleared mask?
>>
>> Additionally, if the out-of-bounds idx points to another user's valid ACR
>> group, is_acr_event_group() returns true, and the code proceeds to update
>> the event->hw.config1 with the victim's hardware counter index.
>>
>> Could this also result in cross-user state corruption of event->hw.config1
>> when the event overflows and cross-reloads the victim's counter?
> That looks like a real issue. The current validation of the acr_mask
> needs to be strengthened: the acr_mask from user space must not exceed
> the index bitmap of the current ACR event group.
> Thanks.

The following changes would address all of the issues found so far:
 - Calculate the indices bitmap for each ACR event group. Any bits in
   the user-space mask not present in the group's bitmap are now dropped.
 - Instead of an early return on invalid bits, drop only the invalid
   bits and continue iterating through all ACR events so the whole group
   still gets configured.
 - Explicitly clear the stale hardware ACR mask for each event before
   writing the new configuration.

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 4768236c054b..1a2c268018a2 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3332,23 +3332,41 @@ static void intel_pmu_enable_event(struct perf_event *event)
 static void intel_pmu_acr_late_setup(struct cpu_hw_events *cpuc)
 {
 	struct perf_event *event, *leader;
-	int i, j, idx;
+	int i, j, k, bit, idx;
+	u64 group_mask;
 
 	for (i = 0; i < cpuc->n_events; i++) {
 		leader = cpuc->event_list[i];
 		if (!is_acr_event_group(leader))
 			continue;
 
-		/* The ACR events must be contiguous. */
+		/* Find the last event of the ACR group. */
 		for (j = i; j < cpuc->n_events; j++) {
 			event = cpuc->event_list[j];
 			if (event->group_leader != leader->group_leader)
 				break;
-			for_each_set_bit(idx, (unsigned long *)&event->attr.config2, X86_PMC_IDX_MAX) {
-				if (i + idx >= cpuc->n_events ||
-				    !is_acr_event_group(cpuc->event_list[i + idx]))
-					return;
-				__set_bit(cpuc->assign[i + idx], (unsigned long *)&event->hw.config1);
+		}
+
+		/* Figure out the group indices bitmap. */
+		group_mask = 0;
+		for (k = i; k < j; k++)
+			group_mask |= BIT_ULL(cpuc->assign[k]);
+
+		/*
+		 * Translate the user-space ACR mask (attr.config2) into the
+		 * physical counter bitmask (hw.config1) for each ACR event in
+		 * the group.
+		 * NOTE: ACR event contiguity is guaranteed by
+		 * intel_pmu_hw_config().
+		 */
+		for (k = i; k < j; k++) {
+			event = cpuc->event_list[k];
+			event->hw.config1 = 0;
+			for_each_set_bit(bit, (unsigned long *)&event->attr.config2, X86_PMC_IDX_MAX) {
+				idx = i + bit;
+				if (idx >= cpuc->n_events ||
+				    !(BIT_ULL(cpuc->assign[idx]) & group_mask) ||
+				    !is_acr_event_group(cpuc->event_list[idx]))
+					continue;
+				__set_bit(cpuc->assign[idx], (unsigned long *)&event->hw.config1);
 			}
 		}
 		i = j - 1;

>
>
>>>  	for (i = 0; i < cpuc->n_events; i++) {
>>>  		leader = cpuc->event_list[i];
>>>  		if (!is_acr_event_group(leader))


Thread overview: 15+ messages
2026-04-20  2:45 [Patch v2 0/4] perf/x86/intel: Fix several bugs of auto counter Dapeng Mi
2026-04-20  2:45 ` [Patch v2 1/4] perf/x86/intel: Clear stale ACR mask before updating new mask Dapeng Mi
2026-04-20  3:24   ` sashiko-bot
2026-04-20  6:29     ` Mi, Dapeng
2026-04-21  5:03       ` Mi, Dapeng [this message]
2026-04-21 22:29   ` Andi Kleen
2026-04-22  0:57     ` Mi, Dapeng
2026-04-20  2:45 ` [Patch v2 2/4] perf/x86/intel: Disable PMI for self-reloaded ACR events Dapeng Mi
2026-04-21 22:37   ` Andi Kleen
2026-04-22  1:24     ` Mi, Dapeng
2026-04-22 17:07       ` Andi Kleen
2026-04-23  1:01         ` Mi, Dapeng
2026-04-23  9:30           ` Mi, Dapeng
2026-04-20  2:45 ` [Patch v2 3/4] perf/x86/intel: Enable auto counter reload for DMR Dapeng Mi
2026-04-20  2:45 ` [Patch v2 4/4] perf/x86/intel: Consolidate MSR_IA32_PERF_CFG_C tracking Dapeng Mi
