From: "Liang, Kan" <kan.liang@linux.intel.com>
To: Vince Weaver <vincent.weaver@maine.edu>
Cc: Peter Zijlstra <peterz@infradead.org>,
linux-kernel@vger.kernel.org, Ingo Molnar <mingo@redhat.com>,
Arnaldo Carvalho de Melo <acme@kernel.org>,
Mark Rutland <mark.rutland@arm.com>,
Alexander Shishkin <alexander.shishkin@linux.intel.com>,
Jiri Olsa <jolsa@redhat.com>, Namhyung Kim <namhyung@kernel.org>,
Stephane Eranian <eranian@google.com>
Subject: Re: [perf] perf_fuzzer causes crash in intel_pmu_drain_pebs_nhm()
Date: Thu, 25 Feb 2021 15:15:26 -0500 [thread overview]
Message-ID: <9b3f84e0-e1cc-cebe-43b6-fa062484ad28@linux.intel.com> (raw)
In-Reply-To: <2a655469-de9d-c80-dd7f-26436d6f03a@maine.edu>
On 2/11/2021 5:14 PM, Vince Weaver wrote:
> On Thu, 11 Feb 2021, Liang, Kan wrote:
>
>>> On Thu, Jan 28, 2021 at 02:49:47PM -0500, Vince Weaver wrote:
>> I'd like to reproduce it on my machine.
>> Is this issue only found in a Haswell client machine?
>>
>> To reproduce the issue, can I use ./perf_fuzzer under perf_event_tests/fuzzer?
>> Do I need to apply any parameters with ./perf_fuzzer?
>>
>> Usually how long does it take to reproduce the issue?
>
> On my machine if I run the commands
> echo 1 > /proc/sys/kernel/nmi_watchdog
> echo 0 > /proc/sys/kernel/perf_event_paranoid
> echo 1000 > /proc/sys/kernel/perf_event_max_sample_rate
> ./perf_fuzzer -s 30000 -r 1611784483
>
> it is repeatable within a minute, but because of the nature of the fuzzer
> it probably won't work for you because the random events will diverge
> based on the different configs of the system.
>
> I can try to generate a simple reproducer, I've just been extremely busy
> here at work and haven't had the chance.
>
> If you want to try to reproduce it the hard way, run the
> ./fast_repro99.sh
> script in the perf_fuzzer directory. It will start fuzzing. My machine
> turned up the issue within a day or so.
>
Sorry for the late response. I just want to let you know that I'm still
trying to reproduce the issue.
I only have a Haswell server on hand. I ran ./fast_repro99.sh on that
machine for 3 days, but I didn't observe any crash.
Now I'm looking for an HSW client in my lab, and I will check whether I
can reproduce the issue on a client machine.
If you have a simple reproducer, please let me know.
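For anyone else following along, the sysctl settings and seeded fuzzer
invocation from Vince's mail can be collected into a small script. This
is only a sketch: the wrapper itself is hypothetical (it is not part of
perf_event_tests), and as Vince notes the seed is machine specific, so
the fuzzer's event stream will diverge on a differently configured box.

```shell
#!/bin/sh
# Sketch of the reproduction steps quoted above (hypothetical wrapper,
# not shipped with perf_event_tests). The -r seed came from Vince's
# machine and is unlikely to reproduce the crash elsewhere as-is.
SEED=1611784483
FUZZ_SECONDS=30000

apply() {
    # Print each sysctl write so the plan is visible, then apply it
    # only when the file is actually writable (i.e. running as root).
    echo "echo $2 > $1"
    if [ -w "$1" ]; then
        echo "$2" > "$1" 2>/dev/null || :
    fi
}

apply /proc/sys/kernel/nmi_watchdog 1
apply /proc/sys/kernel/perf_event_paranoid 0
apply /proc/sys/kernel/perf_event_max_sample_rate 1000

# Run the seeded fuzzer if a perf_fuzzer binary is in the current dir.
echo "./perf_fuzzer -s $FUZZ_SECONDS -r $SEED"
if [ -x ./perf_fuzzer ]; then
    ./perf_fuzzer -s "$FUZZ_SECONDS" -r "$SEED"
fi
```

Run it as root from the perf_fuzzer directory; without root (or without
the binary) it only prints the commands it would run.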
Thanks,
Kan
Thread overview: 13+ messages
2021-01-28 14:25 [perf] perf_fuzzer causes crash in intel_pmu_drain_pebs_nhm() Vince Weaver
2021-01-28 19:49 ` Vince Weaver
2021-02-11 14:53 ` Peter Zijlstra
2021-02-11 21:37 ` Liang, Kan
2021-02-11 22:14 ` Vince Weaver
2021-02-25 20:15 ` Liang, Kan [this message]
2021-03-01 13:20 ` Liang, Kan
2021-03-02 5:29 ` Vince Weaver
2021-03-03 18:16 ` [perf] perf_fuzzer causes unchecked MSR access error Vince Weaver
2021-03-03 19:28 ` Stephane Eranian
2021-03-03 20:00 ` Liang, Kan
2021-03-03 20:22 ` Vince Weaver
2021-03-04 19:33 ` Liang, Kan