From: Mathieu Desnoyers via lttng-dev <lttng-dev@lists.lttng.org>
To: Minlan Wang <wangminlan@szsandstone.com>
Cc: lttng-dev <lttng-dev@lists.lttng.org>, paulmck <paulmck@kernel.org>
Subject: Re: [lttng-dev] ***UNCHECKED*** Re: Re: urcu workqueue thread uses 99% of cpu while workqueue is empty
Date: Wed, 22 Jun 2022 16:52:00 -0400 (EDT) [thread overview]
Message-ID: <589742415.26853.1655931120526.JavaMail.zimbra@efficios.com> (raw)
In-Reply-To: <1779693726.26841.1655929703853.JavaMail.zimbra@efficios.com>
----- On Jun 22, 2022, at 4:28 PM, Mathieu Desnoyers mathieu.desnoyers@efficios.com wrote:
> ----- On Jun 22, 2022, at 9:19 AM, Mathieu Desnoyers mathieu.desnoyers@efficios.com wrote:
>
>> ----- On Jun 22, 2022, at 3:45 AM, Minlan Wang wangminlan@szsandstone.com wrote:
>>
>> [...]
>>
>> Hi Minlan,
>>
>>> --
>>> 1.8.3.1
>>>
>>>
>>> And the lttng session is configured to trace these events:
>>> kernel: syscall futex
>>
>> On the kernel side, in addition to the futex syscall, I would really like to see
>> what happens in the scheduler, mainly the wait/wakeup tracepoints. This can be
>> added by using:
>>
>> lttng enable-event -k 'sched_*'
>>
>> This should help us confirm whether we indeed have a situation where queued
>> wake-ups happen to wake up a wait that only begins later, which is unexpected
>> by the current liburcu userspace code.
>>
>> [...]
>>
>>> ---
>>>
>>> The babeltrace output of this session is pretty big, 6 MB in size; I put it in
>>> the attachment trace_0622.tar.bz2.
>>> Let me know if your mailbox can't handle such a big attachment.
>>
>> It would be even better if you could share the binary trace, because then it's
>> easy to load it in Trace Compass, cut away time ranges that don't matter, and
>> do lots of other useful stuff.
>
> I just found the relevant snippet of documentation in futex(2):
>
> FUTEX_WAIT
> Returns 0 if the caller was woken up. Note that a wake-up can
> also be caused by common futex usage patterns in unrelated code
> that happened to have previously used the futex word's memory
> location (e.g., typical futex-based implementations of Pthreads
> mutexes can cause this under some conditions). Therefore, callers
> should always conservatively assume that a return value of 0
> can mean a spurious wake-up, and use the futex word's value
> (i.e., the user-space synchronization scheme) to decide whether
> to continue to block or not.
>
> I'm pretty sure this is what is happening here.
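To make the man page's advice concrete, here is a minimal sketch using the raw
futex system call (the names futex_word, wait_for_work and queue_work are
invented for the example; this is not the liburcu code itself): after
FUTEX_WAIT returns 0, the waiter re-reads the futex word and goes back to
sleep if it has not actually changed, rather than trusting the wake-up.

#include <errno.h>
#include <linux/futex.h>
#include <stdint.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Example only: convention here is -1 = waiter may sleep, 0 = work queued. */
static int32_t futex_word = -1;

static long sys_futex(int32_t *uaddr, int op, int32_t val)
{
        return syscall(SYS_futex, uaddr, op, val, NULL, NULL, 0);
}

static void wait_for_work(void)
{
        /* Re-check the futex word after every return from FUTEX_WAIT:
         * a return value of 0 may be a spurious wake-up. */
        while (__atomic_load_n(&futex_word, __ATOMIC_ACQUIRE) == -1) {
                if (sys_futex(&futex_word, FUTEX_WAIT, -1) == 0)
                        continue;   /* woken, possibly spuriously: re-check */
                if (errno == EAGAIN)
                        break;      /* word already changed before sleeping */
                /* EINTR: loop and re-check as well */
        }
}

static void queue_work(void)
{
        /* Publish the state change first, then wake the waiter. */
        __atomic_store_n(&futex_word, 0, __ATOMIC_RELEASE);
        sys_futex(&futex_word, FUTEX_WAKE, 1);
}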
Here is the series of patches for review on Gerrit:
remote: https://review.lttng.org/c/userspace-rcu/+/8441 Fix: workqueue: futex wait: handle spurious futex wakeups [NEW]
remote: https://review.lttng.org/c/userspace-rcu/+/8442 Fix: urcu: futex wait: handle spurious futex wakeups [NEW]
remote: https://review.lttng.org/c/userspace-rcu/+/8443 Fix: call_rcu: futex wait: handle spurious futex wakeups [NEW]
remote: https://review.lttng.org/c/userspace-rcu/+/8444 Fix: urcu-wait: futex wait: handle spurious futex wakeups [NEW]
remote: https://review.lttng.org/c/userspace-rcu/+/8445 Fix: defer_rcu: futex wait: handle spurious futex wakeups [NEW]
remote: https://review.lttng.org/c/userspace-rcu/+/8446 Fix: urcu-qsbr: futex wait: handle spurious futex wakeups [NEW]
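The idea in each of them is the same as in the sketch above: treat a 0 return
from the futex wait as possibly spurious and re-check the futex word in a loop
before proceeding, at every wait site (workqueue, urcu, call_rcu, urcu-wait,
defer_rcu, urcu-qsbr).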
Thanks,
Mathieu
>
> Thanks,
>
> Mathieu
>
>
>>
>> Thanks,
>>
>> Mathieu
>>
>> --
>> Mathieu Desnoyers
>> EfficiOS Inc.
>> http://www.efficios.com
>
> --
> Mathieu Desnoyers
> EfficiOS Inc.
> http://www.efficios.com
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
_______________________________________________
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev
Thread overview: 22+ messages
2022-06-14 3:55 [lttng-dev] urcu workqueue thread uses 99% of cpu while workqueue is empty Minlan Wang via lttng-dev
2022-06-14 13:39 ` Mathieu Desnoyers via lttng-dev
2022-06-14 13:40 ` Mathieu Desnoyers via lttng-dev
2022-06-14 14:19 ` Mathieu Desnoyers via lttng-dev
2022-06-14 15:53 ` Mathieu Desnoyers via lttng-dev
2022-06-15 3:49 ` Minlan Wang via lttng-dev
2022-06-15 13:35 ` Mathieu Desnoyers via lttng-dev
2022-06-15 14:15 ` Mathieu Desnoyers via lttng-dev
2022-06-16 7:09 ` Minlan Wang via lttng-dev
2022-06-16 8:09 ` Minlan Wang via lttng-dev
2022-06-17 13:37 ` Mathieu Desnoyers via lttng-dev
2022-06-21 3:52 ` Minlan Wang via lttng-dev
2022-06-21 13:12 ` [lttng-dev] ***UNCHECKED*** " Mathieu Desnoyers via lttng-dev
2022-06-22 7:45 ` Minlan Wang via lttng-dev
2022-06-22 13:19 ` [lttng-dev] ***UNCHECKED*** " Mathieu Desnoyers via lttng-dev
2022-06-22 20:28 ` Mathieu Desnoyers via lttng-dev
2022-06-22 20:52 ` Mathieu Desnoyers via lttng-dev [this message]
2022-06-23 9:08 ` Minlan Wang via lttng-dev
[not found] ` <20220623034528.GA271179@localhost.localdomain>
2022-06-23 3:57 ` Minlan Wang via lttng-dev
2022-06-23 14:09 ` Mathieu Desnoyers via lttng-dev
2022-06-24 6:21 ` Minlan Wang via lttng-dev
2022-06-23 8:36 ` [lttng-dev] [PART 4/4] " Minlan Wang via lttng-dev