From: Sagi Grimberg <sagi@grimberg.me>
To: Hannes Reinecke <hare@suse.de>, Christoph Hellwig <hch@lst.de>,
Hannes Reinecke <hare@kernel.org>
Cc: Keith Busch <kbusch@kernel.org>, linux-nvme@lists.infradead.org
Subject: Re: [PATCHv2] nvme-tcp: align I/O cpu with blk-mq mapping
Date: Wed, 19 Jun 2024 18:58:22 +0300 [thread overview]
Message-ID: <b4dc2a6a-1663-4144-a607-edb6beef48c0@grimberg.me> (raw)
In-Reply-To: <eb0023bc-3a3b-4f31-955b-c6aad4cad4a3@suse.de>
>> I see how your patch addresses the case of multiple controllers
>> falling into the same mappings.
>> You could have selected a different mq_map entry for each controller
>> (out of the entries that map to the qid).
>>
> Looked at it, but had no idea how to figure out the load.
> The load is actually per-CPU, but we only have per-controller
> structures, so we would need to introduce a per-cpu counter tracking
> the number of queues scheduled on each CPU.
> But that won't help with the CPU oversubscription issue; we might
> still have substantially more queues overall than we have CPUs...
I think that it would still be better than what you have right now.
IIUC, right now every controller will get (based on your example):
queue 1: using cpu 6
queue 2: using cpu 9
queue 3: using cpu 18
But selecting a different mq_map entry can give:
ctrl1:
queue 1: using cpu 6
queue 2: using cpu 9
queue 3: using cpu 18
ctrl2:
queue 1: using cpu 7
queue 2: using cpu 10
queue 3: using cpu 19
ctrl3:
queue 1: using cpu 8
queue 2: using cpu 11
queue 3: using cpu 20
ctrl4:
queue 1: using cpu 54
queue 2: using cpu 57
queue 3: using cpu 66
and so on...
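The selection above can be sketched in plain userspace C (a model only, not kernel code; `pick_io_cpu` and the `queue_count` per-CPU counter are hypothetical names, not from the patch): among the CPUs that mq_map sends to a given qid, pick the one with the fewest queues already assigned, so successive controllers spread across the candidate entries instead of piling onto the same cores.

```c
#include <assert.h>
#include <limits.h>

#define NR_CPUS 8

/* Hypothetical model: queue_count[cpu] is the number of nvme-tcp
 * queues already pinned to that CPU, across all controllers. */
static int queue_count[NR_CPUS];

/* Pick the least-loaded CPU among those that mq_map assigns to @qid,
 * and account for the new queue. Returns -1 if no CPU maps to @qid. */
static int pick_io_cpu(const int *mq_map, int qid)
{
	int best = -1, best_load = INT_MAX;

	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		if (mq_map[cpu] != qid)
			continue;
		if (queue_count[cpu] < best_load) {
			best = cpu;
			best_load = queue_count[cpu];
		}
	}
	if (best >= 0)
		queue_count[best]++;
	return best;
}
```

With mq_map = {0,0,1,1,2,2,3,3}, four controllers asking for qid 1 get CPUs 2, 3, 2, 3 in turn: each controller lands on a different mq_map entry until all candidates carry equal load, mirroring the per-controller spread in the example above.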
>
>>>
>>> Not sure how wq_unbound helps in this case; in theory the workqueue
>>> items can be pushed to arbitrary CPUs, but that only leads to even
>>> worse thread bouncing.
>>>
>>> However, topic for ALPSS. We really should have some sort of
>>> backpressure here.
>>
>> I have a patch that has been sitting around for some time now, to
>> make the RX path run directly from softirq, which should make RX
>> execute on the CPU core mapped to the RSS hash.
>> Perhaps you or your customer can give it a go.
>>
> No s**t. That is pretty much what I wanted to do.
> I'll be sure to give it a go.
> Thanks for that!
You will need another prep patch for it:
--
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 3649987c0a2d..b6ea7e337eb8 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -955,6 +955,18 @@ static int nvme_tcp_recv_skb(read_descriptor_t *desc, struct sk_buff *skb,
 	return consumed;
 }
 
+static int nvme_tcp_try_recv_locked(struct nvme_tcp_queue *queue)
+{
+	struct socket *sock = queue->sock;
+	struct sock *sk = sock->sk;
+	read_descriptor_t rd_desc;
+
+	rd_desc.arg.data = queue;
+	rd_desc.count = 1;
+	queue->nr_cqe = 0;
+	return sock->ops->read_sock(sk, &rd_desc, nvme_tcp_recv_skb);
+}
+
 static void nvme_tcp_data_ready(struct sock *sk)
 {
 	struct nvme_tcp_queue *queue;
@@ -1251,16 +1263,11 @@ static int nvme_tcp_try_send(struct nvme_tcp_queue *queue)
 
 static int nvme_tcp_try_recv(struct nvme_tcp_queue *queue)
 {
-	struct socket *sock = queue->sock;
-	struct sock *sk = sock->sk;
-	read_descriptor_t rd_desc;
+	struct sock *sk = queue->sock->sk;
 	int consumed;
 
-	rd_desc.arg.data = queue;
-	rd_desc.count = 1;
 	lock_sock(sk);
-	queue->nr_cqe = 0;
-	consumed = sock->ops->read_sock(sk, &rd_desc, nvme_tcp_recv_skb);
+	consumed = nvme_tcp_try_recv_locked(queue);
 	release_sock(sk);
 	return consumed;
 }
--
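The point of this split can be modeled in plain userspace C (a sketch only: a pthread mutex stands in for lock_sock/release_sock, and the constant in try_recv_locked is fake bookkeeping, not the real read_sock path): the _locked variant does the work and assumes the caller holds the lock, so a caller that already holds it, such as the RX-from-softirq data_ready path the follow-up patch adds, can invoke it directly without the lock/unlock wrapper.

```c
#include <assert.h>
#include <pthread.h>

/* Model of a queue: the mutex stands in for the socket lock. */
struct queue {
	pthread_mutex_t lock;
	int consumed;	/* stand-in for bytes drained by read_sock() */
};

/* Caller must hold q->lock (like nvme_tcp_try_recv_locked). */
static int try_recv_locked(struct queue *q)
{
	q->consumed += 4096;	/* pretend we consumed one page */
	return q->consumed;
}

/* Process-context wrapper: takes the lock, then reuses the locked
 * helper (like nvme_tcp_try_recv after the prep patch). */
static int try_recv(struct queue *q)
{
	int ret;

	pthread_mutex_lock(&q->lock);
	ret = try_recv_locked(q);
	pthread_mutex_unlock(&q->lock);
	return ret;
}
```

A data_ready-style caller that already owns the lock calls try_recv_locked() directly; everything else goes through try_recv(). Factoring the body out of the locking wrapper is what makes the helper reusable from the softirq path.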
Thread overview: 11+ messages
2024-06-19 14:55 [PATCHv2] nvme-tcp: align I/O cpu with blk-mq mapping Hannes Reinecke
2024-06-19 14:59 ` Christoph Hellwig
2024-06-19 15:23 ` Hannes Reinecke
2024-06-19 15:43 ` Sagi Grimberg
2024-06-19 15:49 ` Hannes Reinecke
2024-06-19 15:58 ` Sagi Grimberg [this message]
2024-06-24 10:02 ` Sagi Grimberg
2024-06-24 20:01 ` Kamaljit Singh
2024-06-25 6:49 ` Sagi Grimberg
2024-06-25 6:05 ` Hannes Reinecke
2024-06-25 6:51 ` Sagi Grimberg