* Re: nvme-tls and TCP window full
[not found] ` <6a9e0fbf-ca1a-aadd-e79a-c70ecd14bc28@grimberg.me>
@ 2023-07-13 9:48 ` Hannes Reinecke
2023-07-13 10:11 ` Sagi Grimberg
0 siblings, 1 reply; 6+ messages in thread
From: Hannes Reinecke @ 2023-07-13 9:48 UTC (permalink / raw)
To: Sagi Grimberg, linux-nvme@lists.infradead.org, Jakub Kicinski,
open list:NETWORKING [GENERAL]
On 7/11/23 14:05, Sagi Grimberg wrote:
>
>>> Hey Hannes,
>>>
>>> Any progress on this one?
>>>
>> Oh well; slow going.
[ .. ]
>> Maybe the server doesn't retire skbs (or not all of them), causing the
>> TCP window to shrink.
>> That, of course, is wild guessing, as I have no idea if and how calls
>> to 'consume_skb' reflect back to the TCP window size.
>
> skbs are unrelated to the TCP window. They relate to the socket send
> buffer. skbs left dangling would cause server side to run out of memory,
> not for the TCP window to close. The two are completely unrelated.
Ouch.
Wasn't me, in the end:
diff --git a/net/tls/tls_strp.c b/net/tls/tls_strp.c
index f37f4a0fcd3c..ca1e0e198ceb 100644
--- a/net/tls/tls_strp.c
+++ b/net/tls/tls_strp.c
@@ -369,7 +369,6 @@ static int tls_strp_copyin(read_descriptor_t *desc, struct sk_buff *in_skb,
static int tls_strp_read_copyin(struct tls_strparser *strp)
{
- struct socket *sock = strp->sk->sk_socket;
read_descriptor_t desc;
desc.arg.data = strp;
@@ -377,7 +376,7 @@ static int tls_strp_read_copyin(struct tls_strparser *strp)
desc.count = 1; /* give more than one skb per call */
/* sk should be locked here, so okay to do read_sock */
- sock->ops->read_sock(strp->sk, &desc, tls_strp_copyin);
+ tcp_read_sock(strp->sk, &desc, tls_strp_copyin);
return desc.error;
}
Otherwise we'd enter a recursion calling ->read_sock(), which will
redirect to tls_sw_read_sock(), calling tls_strp_check_rcv(), calling
->read_sock() ...
It got covered up with the tls_rx_reader_lock() Jakub put in, so I
really only noticed it when instrumenting that one.
And my reading seems that the current in-kernel TLS implementation
assumes TCP as the underlying transport anyway, so no harm done.
Jakub?
Cheers,
Hannes
^ permalink raw reply related [flat|nested] 6+ messages in thread
* Re: nvme-tls and TCP window full
2023-07-13 9:48 ` nvme-tls and TCP window full Hannes Reinecke
@ 2023-07-13 10:11 ` Sagi Grimberg
2023-07-13 10:16 ` Hannes Reinecke
0 siblings, 1 reply; 6+ messages in thread
From: Sagi Grimberg @ 2023-07-13 10:11 UTC (permalink / raw)
To: Hannes Reinecke, linux-nvme@lists.infradead.org, Jakub Kicinski,
open list:NETWORKING [GENERAL]
>> skbs are unrelated to the TCP window. They relate to the socket send
>> buffer. skbs left dangling would cause server side to run out of memory,
>> not for the TCP window to close. The two are completely unrelated.
>
> Ouch.
> Wasn't me, in the end:
>
> diff --git a/net/tls/tls_strp.c b/net/tls/tls_strp.c
> index f37f4a0fcd3c..ca1e0e198ceb 100644
> --- a/net/tls/tls_strp.c
> +++ b/net/tls/tls_strp.c
> @@ -369,7 +369,6 @@ static int tls_strp_copyin(read_descriptor_t *desc, struct sk_buff *in_skb,
>
> static int tls_strp_read_copyin(struct tls_strparser *strp)
> {
> - struct socket *sock = strp->sk->sk_socket;
> read_descriptor_t desc;
>
> desc.arg.data = strp;
> @@ -377,7 +376,7 @@ static int tls_strp_read_copyin(struct tls_strparser *strp)
> desc.count = 1; /* give more than one skb per call */
>
> /* sk should be locked here, so okay to do read_sock */
> - sock->ops->read_sock(strp->sk, &desc, tls_strp_copyin);
> + tcp_read_sock(strp->sk, &desc, tls_strp_copyin);
>
> return desc.error;
> }
>
> Otherwise we'd enter a recursion calling ->read_sock(), which will
> redirect to tls_sw_read_sock(), calling tls_strp_check_rcv(), calling
> ->read_sock() ...
Is this new? How did this pop up just now?
> It got covered up with the tls_rx_reader_lock() Jakub put in, so I
> really only noticed it when instrumenting that one.
So without it, you get two contexts reading from the socket?
Not sure how this works, but obviously wrong...
> And my reading seems that the current in-kernel TLS implementation
> assumes TCP as the underlying transport anyway, so no harm done.
> Jakub?
While it is correct that the assumption is TCP-only, I think the
right thing to do would be to store the original read_sock and call
that...
* Re: nvme-tls and TCP window full
2023-07-13 10:11 ` Sagi Grimberg
@ 2023-07-13 10:16 ` Hannes Reinecke
2023-07-18 18:59 ` Jakub Kicinski
0 siblings, 1 reply; 6+ messages in thread
From: Hannes Reinecke @ 2023-07-13 10:16 UTC (permalink / raw)
To: Sagi Grimberg, linux-nvme@lists.infradead.org, Jakub Kicinski,
open list:NETWORKING [GENERAL]
On 7/13/23 12:11, Sagi Grimberg wrote:
>
>>> skbs are unrelated to the TCP window. They relate to the socket send
>>> buffer. skbs left dangling would cause server side to run out of memory,
>>> not for the TCP window to close. The two are completely unrelated.
>>
>> Ouch.
>> Wasn't me, in the end:
>>
>> diff --git a/net/tls/tls_strp.c b/net/tls/tls_strp.c
>> index f37f4a0fcd3c..ca1e0e198ceb 100644
>> --- a/net/tls/tls_strp.c
>> +++ b/net/tls/tls_strp.c
>> @@ -369,7 +369,6 @@ static int tls_strp_copyin(read_descriptor_t *desc, struct sk_buff *in_skb,
>>
>> static int tls_strp_read_copyin(struct tls_strparser *strp)
>> {
>> - struct socket *sock = strp->sk->sk_socket;
>> read_descriptor_t desc;
>>
>> desc.arg.data = strp;
>> @@ -377,7 +376,7 @@ static int tls_strp_read_copyin(struct tls_strparser *strp)
>> desc.count = 1; /* give more than one skb per call */
>>
>> /* sk should be locked here, so okay to do read_sock */
>> - sock->ops->read_sock(strp->sk, &desc, tls_strp_copyin);
>> + tcp_read_sock(strp->sk, &desc, tls_strp_copyin);
>>
>> return desc.error;
>> }
>>
>> Otherwise we'd enter a recursion calling ->read_sock(), which will
>> redirect to tls_sw_read_sock(), calling tls_strp_check_rcv(), calling
>> ->read_sock() ...
>
> Is this new? How did this pop up just now?
>
It's not new; this has been in there since time immemorial.
It just got uncovered as yours truly was brave enough to implement
->read_sock() for TLS ...
>> It got covered up with the tls_rx_reader_lock() Jakub put in, so I
>> really only noticed it when instrumenting that one.
>
> So without it, you get two contexts reading from the socket?
> Not sure how this works, but obviously wrong...
>
Oh, no. Without it you get a loop, eventually resulting in a stack overflow.
>> And my reading seems that the current in-kernel TLS implementation
>> assumes TCP as the underlying transport anyway, so no harm done.
>> Jakub?
>
> While it is correct that the assumption is TCP-only, I think the
> right thing to do would be to store the original read_sock and call
> that...
Ah, sure. Or that.
Cheers,
Hannes
* Re: nvme-tls and TCP window full
2023-07-13 10:16 ` Hannes Reinecke
@ 2023-07-18 18:59 ` Jakub Kicinski
2023-07-19 7:27 ` Hannes Reinecke
0 siblings, 1 reply; 6+ messages in thread
From: Jakub Kicinski @ 2023-07-18 18:59 UTC (permalink / raw)
To: Hannes Reinecke
Cc: Sagi Grimberg, linux-nvme@lists.infradead.org,
open list:NETWORKING [GENERAL]
On Thu, 13 Jul 2023 12:16:13 +0200 Hannes Reinecke wrote:
> >> And my reading seems that the current in-kernel TLS implementation
> >> assumes TCP as the underlying transport anyway, so no harm done.
> >> Jakub?
> >
> > While it is correct that the assumption is TCP-only, I think the
> > right thing to do would be to store the original read_sock and call
> > that...
>
> Ah, sure. Or that.
Yup, sorry for the late reply. read_sock could also be replaced by BPF
or some other thing, even if it's always TCP "at the bottom".
* Re: nvme-tls and TCP window full
2023-07-18 18:59 ` Jakub Kicinski
@ 2023-07-19 7:27 ` Hannes Reinecke
2023-07-19 11:54 ` Sagi Grimberg
0 siblings, 1 reply; 6+ messages in thread
From: Hannes Reinecke @ 2023-07-19 7:27 UTC (permalink / raw)
To: Jakub Kicinski
Cc: Sagi Grimberg, linux-nvme@lists.infradead.org,
open list:NETWORKING [GENERAL]
On 7/18/23 20:59, Jakub Kicinski wrote:
> On Thu, 13 Jul 2023 12:16:13 +0200 Hannes Reinecke wrote:
>>>> And my reading seems that the current in-kernel TLS implementation
>>>> assumes TCP as the underlying transport anyway, so no harm done.
>>>> Jakub?
>>>
>>> While it is correct that the assumption is TCP-only, I think the
>>> right thing to do would be to store the original read_sock and call
>>> that...
>>
>> Ah, sure. Or that.
>
> Yup, sorry for late reply, read_sock could also be replaced by BPF
> or some other thing, even if it's always TCP "at the bottom".
Hmm. So what do you suggest?
Remember, the current patch does this:
@@ -377,7 +376,7 @@ static int tls_strp_read_copyin(struct tls_strparser *strp)
desc.count = 1; /* give more than one skb per call */
/* sk should be locked here, so okay to do read_sock */
- sock->ops->read_sock(strp->sk, &desc, tls_strp_copyin);
+ tcp_read_sock(strp->sk, &desc, tls_strp_copyin);
return desc.error;
}
precisely because ->read_sock() gets redirected when TLS engages.
And also remember TLS does _not_ use the normal redirection by
intercepting the callbacks from 'struct sock', but rather replaces the
->ops callback in struct socket.
So I'm slightly at a loss on how to implement a new callback without
having to redo the entire TLS handover.
Hence I vastly prefer just the simple patch by using tcp_read_sock()
directly.
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare@suse.de +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Ivo Totev, Andrew
Myers, Andrew McDonald, Martje Boudien Moerman
* Re: nvme-tls and TCP window full
2023-07-19 7:27 ` Hannes Reinecke
@ 2023-07-19 11:54 ` Sagi Grimberg
0 siblings, 0 replies; 6+ messages in thread
From: Sagi Grimberg @ 2023-07-19 11:54 UTC (permalink / raw)
To: Hannes Reinecke, Jakub Kicinski
Cc: linux-nvme@lists.infradead.org, open list:NETWORKING [GENERAL]
>>>>> And my reading seems that the current in-kernel TLS implementation
>>>>> assumes TCP as the underlying transport anyway, so no harm done.
>>>>> Jakub?
>>>>
>>>> While it is correct that the assumption is TCP-only, I think the
>>>> right thing to do would be to store the original read_sock and call
>>>> that...
>>>
>>> Ah, sure. Or that.
>>
>> Yup, sorry for late reply, read_sock could also be replaced by BPF
>> or some other thing, even if it's always TCP "at the bottom".
>
> Hmm. So what do you suggest?
> Remember, the current patch does this:
>
> @@ -377,7 +376,7 @@ static int tls_strp_read_copyin(struct tls_strparser *strp)
> desc.count = 1; /* give more than one skb per call */
>
> /* sk should be locked here, so okay to do read_sock */
> - sock->ops->read_sock(strp->sk, &desc, tls_strp_copyin);
> + tcp_read_sock(strp->sk, &desc, tls_strp_copyin);
>
> return desc.error;
> }
>
> precisely because ->read_sock() gets redirected when TLS engages.
> And also remember TLS does _not_ use the normal redirection by
> intercepting the callbacks from 'struct sock', but rather replaces the
> ->ops callback in struct socket.
>
> So I'm slightly at a loss on how to implement a new callback without
> having to redo the entire TLS handover.
> Hence I vastly prefer just the simple patch by using tcp_read_sock()
> directly.
I think this is fine. The tls parser is tied to the bottom socket
being a tcp socket anyway; until Hannes's patch 6/6, read_sock() was
by definition always tcp_read_sock. So this is a valid replacement IMO.
I don't think it is worth the effort to "prepare" for generalizing
the tls parser.
end of thread (newest: 2023-07-19 11:54 UTC)
Thread overview: 6+ messages
[not found] <f10a9e4a-b545-429d-803e-c1d63a084afe@suse.de>
[not found] ` <49422387-5ea3-af84-3f94-076c94748fff@grimberg.me>
[not found] ` <ed5b22c6-d862-8706-fc2e-5306ed1eaad2@grimberg.me>
[not found] ` <a50ee71b-8ee9-7636-917d-694eb2a482b4@suse.de>
[not found] ` <6a9e0fbf-ca1a-aadd-e79a-c70ecd14bc28@grimberg.me>
2023-07-13 9:48 ` nvme-tls and TCP window full Hannes Reinecke
2023-07-13 10:11 ` Sagi Grimberg
2023-07-13 10:16 ` Hannes Reinecke
2023-07-18 18:59 ` Jakub Kicinski
2023-07-19 7:27 ` Hannes Reinecke
2023-07-19 11:54 ` Sagi Grimberg