From: Schspa Shi <schspa@gmail.com>
To: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Luiz Augusto von Dentz <luiz.dentz@gmail.com>,
Marcel Holtmann <marcel@holtmann.org>,
Johan Hedberg <johan.hedberg@gmail.com>,
syzbot <syzbot+243b7d89777f90f7613b@syzkaller.appspotmail.com>,
syzkaller-bugs@googlegroups.com,
Lai Jiangshan <jiangshanlai@gmail.com>,
"linux-bluetooth@vger.kernel.org"
<linux-bluetooth@vger.kernel.org>, Tejun Heo <tj@kernel.org>
Subject: Re: [PATCH] Bluetooth: use hdev->workqueue when queuing hdev->{cmd,ncmd}_timer works
Date: Mon, 05 Sep 2022 20:26:52 +0800
Message-ID: <m2bkruxarg.fsf@gmail.com>
In-Reply-To: <24660886-da42-c129-77e6-87c344879a3b@I-love.SAKURA.ne.jp>
Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> writes:
> On 2022/09/05 17:24, Schspa Shi wrote:
>>
>> Tejun Heo <tj@kernel.org> writes:
>>
>>> Hello,
>>>
>>> On Sat, Sep 03, 2022 at 07:11:18PM -0700, Luiz Augusto von Dentz wrote:
>>>> And we can check for __WQ_DRAINING? Anyway checking
>>>
>>> Please don't do that. That's an internal flag. It shouldn't be *that*
>>> difficult to avoid this without peeking into wq internal state.
>>>
>>> Thanks.
>>
>> It seems we only need to queue the hdev->{cmd,ncmd}_timer works on
>> hdev->workqueue; there would be no race, because drain_workqueue()
>> flushes all pending work internally.
>
> True for queue_work(), not always true for queue_delayed_work(). Explained below.
>
Ok, you are right, got it now.
>> Any new timeout work will see the HCI_CMD_DRAIN_WORKQUEUE flag after we
>> have cancelled and flushed all the delayed works.
>>
>
> If you don't mind
>
> queue_work(&hdev->cmd_work) followed by hci_cmd_work() (case A below)
>
> and/or
>
> queue_delayed_work(&hdev->ncmd_timer) potentially followed by hci_ncmd_timeout()/hci_reset_dev() (cases B and C below)
>
> being called after the HCI_CMD_DRAIN_WORKQUEUE flag was observed, we are fine as-is.
> If you do mind one of these, we need to use RCU protection.
>
>
>
> Case A:
>
>   hci_dev_do_reset() {              hci_cmd_work() {
>                                         if (test_bit(HCI_RESET, &hdev->flags) ||
>                                             hci_dev_test_flag(hdev, HCI_CMD_DRAIN_WORKQUEUE))
>                                           cancel_delayed_work(&hdev->cmd_timer);
>                                         else
>                                           queue_delayed_work(hdev->workqueue, &hdev->cmd_timer,
>                                                              HCI_CMD_TIMEOUT);
>                                       } else {
>                                         skb_queue_head(&hdev->cmd_q, skb);
>     hci_dev_set_flag(hdev, HCI_CMD_DRAIN_WORKQUEUE);
>     cancel_delayed_work(&hdev->cmd_timer);
>     cancel_delayed_work(&hdev->ncmd_timer);
>                                         queue_work(hdev->workqueue, &hdev->cmd_work);
>                                         // Queued after HCI_CMD_DRAIN_WORKQUEUE was set,
>                                         // despite the intent of HCI_CMD_DRAIN_WORKQUEUE...
>     drain_workqueue(hdev->workqueue);
>     // Will wait for the hci_cmd_work() queued by queue_work() above to complete.
>                                       }
>
>     // Actual flush() happens here.
>
>     hci_dev_clear_flag(hdev, HCI_CMD_DRAIN_WORKQUEUE);
>   }
>
>
>
> Case B:
>
>   hci_dev_do_reset() {              handle_cmd_cnt_and_timer(struct hci_dev *hdev, u8 ncmd) {
>                                       if (!hci_dev_test_flag(hdev, HCI_CMD_DRAIN_WORKQUEUE))
>     hci_dev_set_flag(hdev, HCI_CMD_DRAIN_WORKQUEUE);
>     cancel_delayed_work(&hdev->cmd_timer);
>                                         queue_delayed_work(hdev->workqueue, &hdev->ncmd_timer,
>                                                            HCI_NCMD_TIMEOUT);
>     cancel_delayed_work(&hdev->ncmd_timer);
>     // May or may not cancel hci_ncmd_timeout() queued by queue_delayed_work().
>     drain_workqueue(hdev->workqueue);
>     // Will wait for hci_ncmd_timeout() queued by queue_delayed_work() to complete
>     // if cancel_delayed_work() failed to cancel it.
>                                     }
>
>     // Actual flush() happens here.
>
>     hci_dev_clear_flag(hdev, HCI_CMD_DRAIN_WORKQUEUE);
>   }
>
>
>
> Case C:
>
>   hci_dev_do_reset() {              handle_cmd_cnt_and_timer(struct hci_dev *hdev, u8 ncmd) {
>                                       if (!hci_dev_test_flag(hdev, HCI_CMD_DRAIN_WORKQUEUE))
>     hci_dev_set_flag(hdev, HCI_CMD_DRAIN_WORKQUEUE);
>     cancel_delayed_work(&hdev->cmd_timer);
>     cancel_delayed_work(&hdev->ncmd_timer); // Does nothing.
>                                         queue_delayed_work(hdev->workqueue, &hdev->ncmd_timer,
>                                                            HCI_NCMD_TIMEOUT);
>     drain_workqueue(hdev->workqueue);
>     // Will wait for hci_ncmd_timeout() queued by queue_delayed_work() to complete
>     // if the delay timer has already expired.
>                                     }
>
>     // Actual flush() happens here, but hci_ncmd_timeout() queued by
>     // queue_delayed_work() can still run if the delay timer had not expired
>     // as of calling drain_workqueue().
>
>     hci_dev_clear_flag(hdev, HCI_CMD_DRAIN_WORKQUEUE);
>   }
--
BRs
Schspa Shi