From: Hannes Reinecke <hare@suse.de>
To: Sagi Grimberg <sagi@grimberg.me>,
Hannes Reinecke <hare@kernel.org>, Christoph Hellwig <hch@lst.de>,
netdev@vger.kernel.org
Cc: Keith Busch <kbusch@kernel.org>, linux-nvme@lists.infradead.org
Subject: Re: [PATCH 6/8] nvme-tcp: reduce callback lock contention
Date: Thu, 18 Jul 2024 08:42:31 +0200 [thread overview]
Message-ID: <a5473f69-5404-4c38-85d9-ca91c5160361@suse.de> (raw)
In-Reply-To: <9b8b57ca-83ae-43a4-84c6-33017dc81a32@grimberg.me>
On 7/17/24 23:19, Sagi Grimberg wrote:
>
>
> On 16/07/2024 10:36, Hannes Reinecke wrote:
>> From: Hannes Reinecke <hare@suse.de>
>>
>> We have heavily queued tx and rx flows, so callbacks might happen
>> at the same time. As the callbacks influence the state machine we
>> really should remove contention here to not impact I/O performance.
>>
>> Signed-off-by: Hannes Reinecke <hare@kernel.org>
>> ---
>> drivers/nvme/host/tcp.c | 14 ++++++++------
>> 1 file changed, 8 insertions(+), 6 deletions(-)
>>
>> diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
>> index a758fbb3f9bb..9634c16d7bc0 100644
>> --- a/drivers/nvme/host/tcp.c
>> +++ b/drivers/nvme/host/tcp.c
>> @@ -1153,28 +1153,28 @@ static void nvme_tcp_data_ready(struct sock *sk)
>> trace_sk_data_ready(sk);
>> - read_lock_bh(&sk->sk_callback_lock);
>> - queue = sk->sk_user_data;
>> + rcu_read_lock();
>> + queue = rcu_dereference_sk_user_data(sk);
>> if (likely(queue && queue->rd_enabled) &&
>> !test_bit(NVME_TCP_Q_POLLING, &queue->flags)) {
>> queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);
>> queue->data_ready_cnt++;
>> }
>> - read_unlock_bh(&sk->sk_callback_lock);
>> + rcu_read_unlock();
>
> Umm, this looks dangerous...
>
> Please give a concrete (numeric) justification for this change, and
> preferably a big fat comment
> on why it is safe to do (for either .data_ready or .write_space).
>
> Is there any precedent of another tcp ulp that does this? I'd like to
> have the netdev folks review this change. CC'ing netdev.
>
The reasoning here is that the queue itself (and, with it, the work
item) will _not_ be deleted once we set 'sk_user_data' to NULL.
The shutdown sequence is:

  kernel_sock_shutdown(queue->sock, SHUT_RDWR);
  nvme_tcp_restore_sock_ops(queue);
  cancel_work_sync(&queue->io_work);
So first we shut down the socket (which fails all in-flight I/O
calls in io_work), then restore the socket callbacks.
As these are now RCU-protected, I'm calling synchronize_rcu() to
ensure all callbacks have left the RCU read-side critical section
before we continue.
As a final step we cancel all outstanding work, i.e. ensure that
any action triggered by the callbacks has completed.
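To make the ordering explicit, the teardown could be sketched roughly
like this (a simplified sketch, not the literal upstream code; it
assumes nvme_tcp_restore_sock_ops() publishes the NULL sk_user_data
via rcu_assign_sk_user_data() under the callback lock):

  /*
   * Sketch: fail in-flight I/O, unpublish the queue from the socket
   * callbacks, wait for any concurrent RCU readers (data_ready /
   * write_space) to exit, then drain whatever work they queued.
   */
  static void nvme_tcp_stop_queue_sketch(struct nvme_tcp_queue *queue)
  {
  	/* wake/fail all pending socket I/O, including io_work */
  	kernel_sock_shutdown(queue->sock, SHUT_RDWR);

  	/* restore original callbacks, set sk_user_data to NULL */
  	nvme_tcp_restore_sock_ops(queue);

  	/* wait for callbacks still inside rcu_read_lock() to finish */
  	synchronize_rcu();

  	/* flush any io_work the callbacks managed to queue */
  	cancel_work_sync(&queue->io_work);
  }

After synchronize_rcu() returns, no callback can observe a non-NULL
sk_user_data anymore, so the final cancel_work_sync() really is final:
nothing can re-queue io_work behind it.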
But sure, comment is fine.
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare@suse.de +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich
2024-07-16 7:36 [PATCHv3 0/8] nvme-tcp: improve scalability Hannes Reinecke
2024-07-16 7:36 ` [PATCH 1/8] nvme-tcp: switch TX deadline to microseconds and make it configurable Hannes Reinecke
2024-07-17 21:03 ` Sagi Grimberg
2024-07-18 6:30 ` Hannes Reinecke
2024-07-16 7:36 ` [PATCH 2/8] nvme-tcp: io_work stall debugging Hannes Reinecke
2024-07-17 21:05 ` Sagi Grimberg
2024-07-16 7:36 ` [PATCH 3/8] nvme-tcp: re-init request list entries Hannes Reinecke
2024-07-17 21:23 ` Sagi Grimberg
2024-07-16 7:36 ` [PATCH 4/8] nvme-tcp: improve stall debugging Hannes Reinecke
2024-07-17 21:11 ` Sagi Grimberg
2024-07-16 7:36 ` [PATCH 5/8] nvme-tcp: debugfs entries for latency statistics Hannes Reinecke
2024-07-17 21:14 ` Sagi Grimberg
2024-07-16 7:36 ` [PATCH 6/8] nvme-tcp: reduce callback lock contention Hannes Reinecke
2024-07-17 21:19 ` Sagi Grimberg
2024-07-18 6:42 ` Hannes Reinecke [this message]
2024-07-21 11:46 ` Sagi Grimberg
2024-07-16 7:36 ` [PATCH 7/8] nvme-tcp: check for SOCK_NOSPACE before sending Hannes Reinecke
2024-07-17 21:19 ` Sagi Grimberg
2024-07-16 7:36 ` [PATCH 8/8] nvme-tcp: align I/O cpu with blk-mq mapping Hannes Reinecke
2024-07-17 21:34 ` Sagi Grimberg
2024-08-13 19:36 ` Sagi Grimberg
2024-07-17 21:01 ` [PATCHv3 0/8] nvme-tcp: improve scalability Sagi Grimberg
2024-07-18 6:20 ` Hannes Reinecke
2024-07-21 12:05 ` Sagi Grimberg