Linux-NVME Archive on lore.kernel.org
From: Hannes Reinecke <hare@suse.de>
To: Sagi Grimberg <sagi@grimberg.me>,
	Hannes Reinecke <hare@kernel.org>, Christoph Hellwig <hch@lst.de>
Cc: Keith Busch <kbusch@kernel.org>, linux-nvme@lists.infradead.org
Subject: Re: [PATCH 1/3] nvme-tcp: improve rx/tx fairness
Date: Tue, 9 Jul 2024 08:51:52 +0200	[thread overview]
Message-ID: <7b5006e7-9df3-4ee3-a2e9-0ce091c21434@suse.de> (raw)
In-Reply-To: <8013f402-4831-46c3-b207-c08821c15760@grimberg.me>

On 7/8/24 21:31, Sagi Grimberg wrote:
> 
> 
> On 08/07/2024 18:50, Hannes Reinecke wrote:
[ .. ]
>>
>> Weellll ... if 'SOCK_NOSPACE' is for blocking sockets, why do we even 
>> get the 'write_space()' callback?
>> It gets triggered quite often, and checking for the SOCK_NOSPACE bit 
>> before sending drops the number of invocations quite significantly.
> 
> I need to check, but where have you tested this? In the inline path?
> Thinking further, I do agree that because the io_work is shared, we may
> sendmsg immediately as space becomes available instead of waiting for
> some minimum amount of space.
>
> Can you please quantify this with your testing?
> How many times do we get the write_space() callback? How many times do
> we get EAGAIN, and what is the performance impact? It would also be
> interesting to see whether this is more apparent in specific workloads;
> I'm assuming it's more apparent with large write workloads.

Stats from my testing
(first column is the queue number; second column is controller 1, third column controller 2):

write_space:
0: 0 0
1: 489 2
2: 31 11
3: 50 1
4: 299 42
5: 1 19
6: 12 0
7: 737 16
8: 636 1
9: 19 1
10: 325 20
11: 28 1
12: 14 1
13: 252 1
14: 100 12
15: 310 1
16: 454 2

queue_busy:
0: 0 0
1: 396 2
2: 22 29
3: 56 5
4: 153 99
5: 2 40
6: 18 0
7: 632 5
8: 590 13
9: 129 2
10: 66 18
11: 91 8
12: 56 14
13: 464 7
14: 24 15
15: 574 8
16: 516 9

The send latency is actually pretty good: around 150 us on controller 1
(with two notable exceptions of 480 us on queue 5 and 646 us on queue 6)
and around 100 us on controller 2 (again with exceptions of 444 us on
queue 5 and 643 us on queue 6). Each queue processed around 150k requests.

Receive latency is far more consistent; each queue has around 150 us on 
controller 1 and 140 us on controller 2.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich




Thread overview: 22+ messages
2024-07-08  7:10 [PATCHv2 0/3] nvme-tcp: improve scalability Hannes Reinecke
2024-07-08  7:10 ` [PATCH 1/3] nvme-tcp: improve rx/tx fairness Hannes Reinecke
2024-07-08 11:57   ` Sagi Grimberg
2024-07-08 13:21     ` Hannes Reinecke
2024-07-08 14:25       ` Sagi Grimberg
2024-07-08 15:50         ` Hannes Reinecke
2024-07-08 19:31           ` Sagi Grimberg
2024-07-09  6:51             ` Hannes Reinecke [this message]
2024-07-09  7:06               ` Sagi Grimberg
2024-07-08  7:10 ` [PATCH 2/3] nvme-tcp: align I/O cpu with blk-mq mapping Hannes Reinecke
2024-07-08 12:08   ` Sagi Grimberg
2024-07-08 12:43     ` Hannes Reinecke
2024-07-08 14:38       ` Sagi Grimberg
2024-07-08  7:10 ` [PATCH 3/3] nvme-tcp: per-controller I/O workqueues Hannes Reinecke
2024-07-08 12:12   ` Sagi Grimberg
2024-07-08 12:48     ` Hannes Reinecke
2024-07-08 14:41       ` Sagi Grimberg
2024-07-10 11:56 ` [PATCHv2 0/3] nvme-tcp: improve scalability Sagi Grimberg
2024-07-10 14:06   ` Hannes Reinecke
2024-07-10 14:45     ` Sagi Grimberg
2024-07-16  6:31 ` Sagi Grimberg
2024-07-16  7:10   ` Hannes Reinecke
