Linux-NVME Archive on lore.kernel.org
From: Hannes Reinecke <hare@suse.de>
To: Sagi Grimberg <sagi@grimberg.me>, Hannes Reinecke <hare@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>, Keith Busch <kbusch@kernel.org>,
	linux-nvme@lists.infradead.org
Subject: Re: [PATCH 2/4] nvme-tcp: align I/O cpu with blk-mq mapping
Date: Thu, 4 Jul 2024 08:43:13 +0200	[thread overview]
Message-ID: <2ea2706d-65a1-419e-aa96-6ca353d954e0@suse.de> (raw)
In-Reply-To: <34d22ad9-1d10-44df-a131-c9aea18fde0c@grimberg.me>

On 7/3/24 21:38, Sagi Grimberg wrote:
[ .. ]
>>>
>>> We should make the io_cpu come from the blk-mq hctx mapping by default,
>>> and for every controller it should use a different cpu from the hctx
>>> mapping. That is the default behavior. In the wq_unbound case, we
>>> skip all of that and make io_cpu = WORK_CPU_UNBOUND, as it was before.
>>>
>>> I'm not sure I follow your logic.
>>>
>> Hehe. That's quite simple: there is none :-)
>> I have been tinkering with that approach in the last weeks, but got 
>> consistently _worse_ results than with the original implementation.
>> So I gave up on trying to make that the default.
> 
> What is the "original implementation" ?

nvme-6.10

> What is your target? nvmet?

nvmet with brd backend

> What is the fio job file you are using?

tiobench-example.fio from the fio samples

> what is the queue count? controller count?

96 queues, 4 subsystems, 2 controllers each.

> What was the queue mapping?
> 
queue 0-5 maps to cpu 6-11
queue 6-11 maps to cpu 54-59
queue 12-17 maps to cpu 18-23
queue 18-23 maps to cpu 66-71
queue 24-29 maps to cpu 24-29
queue 30-35 maps to cpu 72-77
queue 36-41 maps to cpu 30-35
queue 42-47 maps to cpu 78-83
queue 48-53 maps to cpu 36-41
queue 54-59 maps to cpu 84-89
queue 60-65 maps to cpu 42-47
queue 66-71 maps to cpu 90-95
queue 72-77 maps to cpu 12-17
queue 78-83 maps to cpu 60-65
queue 84-89 maps to cpu 0-5
queue 90-95 maps to cpu 48-53

> Please let's NOT condition any of this on the wq_unbound option at this 
> point. This modparam was introduced to address
> a specific issue. If we see I/O timeouts, we should fix them, not tell 
> people to flip a modparam as a solution.
> 
Thing is, there is no 'best' solution. The current implementation is 
actually quite good in the single-subsystem case. Issues start to appear
when doing performance testing under a really high load.
The reason is high contention on the per-cpu workqueues, which are 
simply overwhelmed by doing I/O _and_ servicing 'normal' OS workload
like writing to disk etc.
Switching to wq_unbound reduces the contention and makes the system 
scale better, but that scaling leads to a performance regression in
the single-subsystem case.
(See my other mail for performance numbers.)
So what is 'better'?

>>
>>>>
>>>> And it makes the 'CPU hogged' messages go away, which is a bonus in 
>>>> itself...
>>>
>>> Which messages? aren't these messages saying that the work spent too 
>>> much time? why are you describing the case where the work does not get
>>> cpu quota to run?
>>
>> I mean these messages:
>>
>> workqueue: nvme_tcp_io_work [nvme_tcp] hogged CPU for >10000us 32771 
>> times, consider switching to WQ_UNBOUND
> 
> That means that we are spending too much time in io_work. This is a 
> separate bug. If you look at nvme_tcp_io_work, it has
> a stop condition after 1 millisecond. However, when we call 
> nvme_tcp_try_recv() it just keeps receiving from the socket until
> the socket receive buffer has no more payload. So in theory nothing 
> prevents the io_work from looping there forever.
> 
Oh, no. It's not the loop that is the problem, it's the actual sending
which takes long; in my test runs I've seen about 250 requests timing 
out, the majority of which were still pending on the send_list.
So the io_work function wasn't even running to fetch the requests off 
the list.

> This is indeed a bug that we need to address. Probably by setting 
> rd_desc.count to some limit, decrementing it for every
> skb that we consume, and if we reach that limit and there are more skbs 
> pending, we break and self-requeue.
> 
> If we indeed spend much time processing a single queue in io_work, it is 
> possible that we have a starvation problem
> that is escalating to the timeouts you are seeing.
> 
See above; this is the problem. Most of the requests are still stuck on 
the send_list (with some even still on the req_list) when timeouts 
occur. This means the io_work function is not being scheduled fast 
enough (or often enough) to fetch the requests from the list.

My theory here is that this is due to us using bound workqueues:
each workqueue function has to execute on a given cpu, and we can
only schedule one io_work function per cpu. So if that cpu is busy
(with receiving packets, say, or normal OS tasks) we cannot execute,
and we're seeing starvation.

With wq_unbound we are _not_ tied to a specific cpu, but rather
scheduled in a round-robin fashion. This avoids the starvation,
and hence the I/O timeouts do not occur.
But we need to set the 'cpu' affinity scope for wq_unbound to keep
cache locality, otherwise performance _really_ suffers
as we're bouncing threads all over the place.

>>
>> which I get consistently during testing with the default implementation.
> 
> Hannes, let's please separate this specific issue with the performance 
> enhancements.
> I do not think that we should search for performance enhancements to 
> address what appears to be a logical starvation issue.

I am perfectly fine with that approach. This patchset is indeed just to 
address the I/O timeout issues I've been seeing.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich



