From: Namhyung Kim <namhyung@kernel.org>
To: kan.liang@linux.intel.com
Cc: acme@kernel.org, mingo@redhat.com, peterz@infradead.org,
	jolsa@kernel.org, irogers@google.com,
	linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] perf record: Fix "read LOST count failed" msg with sample read
Date: Wed, 1 Mar 2023 09:27:44 -0800	[thread overview]
Message-ID: <CAM9d7cgwmuCqjdFsW6VUTLeL-BJuoP2F2N6RaWGTuThVFaGFKA@mail.gmail.com> (raw)
In-Reply-To: <20230301150413.27011-1-kan.liang@linux.intel.com>

Hi Kan,

On Wed, Mar 1, 2023 at 7:04 AM <kan.liang@linux.intel.com> wrote:
>
> From: Kan Liang <kan.liang@linux.intel.com>
>
> Hundreds of "read LOST count failed" error messages may be displayed
> when the below command is launched.
>
> perf record -e '{cpu/mem-loads-aux/,cpu/event=0xcd,umask=0x1/}:S' -a
>
> According to commit 89e3106fa25f ("libperf: Handle read format in
> perf_evsel__read()"), the PERF_FORMAT_GROUP read format is only
> available for the group leader. However, record__read_lost_samples()
> goes through every entry of an evlist, which includes both the leader
> and the member events. Every member event errors out and triggers the
> error message. Since there may be hundreds of CPUs on a server, the
> message can be printed hundreds of times, which is very annoying.
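
Since perf_evsel__read() only supports the group read format on the
leader, the read fails for every member on every CPU.  A minimal sketch
of how the read could be skipped for members -- illustrative only, not
the change in this patch; it reuses the existing
evlist__for_each_entry() and evsel__is_group_leader() helpers and
assumes cpu_idx/thread_idx as in __record__read_lost_samples():

	evlist__for_each_entry(evlist, evsel) {
		struct perf_counts_values count = { 0 };

		/* group members cannot be read with PERF_FORMAT_GROUP */
		if (!evsel__is_group_leader(evsel))
			continue;

		if (perf_evsel__read(&evsel->core, cpu_idx, thread_idx, &count) < 0)
			pr_debug("read LOST count failed\n");
	}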
>
> The message itself is correct, but pr_err() is overkill here. The
> other error messages in record__read_lost_samples() are all
> pr_debug(). To keep the output consistent, change
> pr_err("read LOST count failed\n") to
> pr_debug("read LOST count failed\n").
> Users can still see the message with the -v option.
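
For example, with this change the message is still visible when running
the same command with -v:

	perf record -v -e '{cpu/mem-loads-aux/,cpu/event=0xcd,umask=0x1/}:S' -a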
>
> Fixes: e3a23261ad06 ("perf record: Read and inject LOST_SAMPLES events")
> Signed-off-by: Kan Liang <kan.liang@linux.intel.com>

Acked-by: Namhyung Kim <namhyung@kernel.org>

Thanks,
Namhyung


> ---
>  tools/perf/builtin-record.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
> index 8374117e66f6..be7c0c29d15b 100644
> --- a/tools/perf/builtin-record.c
> +++ b/tools/perf/builtin-record.c
> @@ -1866,7 +1866,7 @@ static void __record__read_lost_samples(struct record *rec, struct evsel *evsel,
>         int id_hdr_size;
>
>         if (perf_evsel__read(&evsel->core, cpu_idx, thread_idx, &count) < 0) {
> -               pr_err("read LOST count failed\n");
> +               pr_debug("read LOST count failed\n");
>                 return;
>         }
>
> --
> 2.35.1
>

