* [RFC / RFT net 1/7] tls: rx: device: fix checking decryption status
2023-05-11 1:20 [RFC / RFT net 0/7] tls: rx: strp: fix inline crypto offload Jakub Kicinski
@ 2023-05-11 1:20 ` Jakub Kicinski
2023-05-11 1:20 ` [RFC / RFT net 2/7] tls: rx: strp: set the skb->len of detached / CoW'ed skbs Jakub Kicinski
` (6 subsequent siblings)
7 siblings, 0 replies; 12+ messages in thread
From: Jakub Kicinski @ 2023-05-11 1:20 UTC (permalink / raw)
To: tariqt; +Cc: netdev, Jakub Kicinski
skb->len covers the entire skb, including the frag_list.
In fact we're guaranteed that rxm->full_len <= skb->len,
so since the change under Fixes we have not been checking the
decryption status of any skb but the first.
Note that the skb_pagelen() added here may feel a bit costly,
but it's removed by subsequent fixes, anyway.
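For illustration, a minimal sketch of the intended check, assuming
CONFIG_TLS_DEVICE (the helper name record_fully_decrypted() is made up
for this example and is not part of the patch):

    /* skb->len counts the head, the page frags *and* every skb on the
     * frag_list; skb_pagelen() counts only the head plus page frags.
     * Starting "left" from skb_pagelen() is what lets the walk below
     * actually reach the frag_list.
     */
    static bool record_fully_decrypted(struct sk_buff *skb,
                                       struct strp_msg *rxm)
    {
            struct sk_buff *skb_iter = skb_shinfo(skb)->frag_list;
            int left = rxm->full_len + rxm->offset - skb_pagelen(skb);
            int decrypted = skb->decrypted;

            while (skb_iter && left > 0) {
                    decrypted &= skb_iter->decrypted;
                    left -= skb_iter->len;
                    skb_iter = skb_iter->next;
            }
            return decrypted;
    }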
Reported-by: Tariq Toukan <tariqt@nvidia.com>
Fixes: 86b259f6f888 ("tls: rx: device: bound the frag walk")
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
net/tls/tls_device.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
index a7cc4f9faac2..3b87c7b04ac8 100644
--- a/net/tls/tls_device.c
+++ b/net/tls/tls_device.c
@@ -1012,7 +1012,7 @@ int tls_device_decrypted(struct sock *sk, struct tls_context *tls_ctx)
struct sk_buff *skb_iter;
int left;
- left = rxm->full_len - skb->len;
+ left = rxm->full_len + rxm->offset - skb_pagelen(skb);
/* Check if all the data is decrypted already */
skb_iter = skb_shinfo(skb)->frag_list;
while (skb_iter && left > 0) {
--
2.40.1
* [RFC / RFT net 2/7] tls: rx: strp: set the skb->len of detached / CoW'ed skbs
2023-05-11 1:20 [RFC / RFT net 0/7] tls: rx: strp: fix inline crypto offload Jakub Kicinski
2023-05-11 1:20 ` [RFC / RFT net 1/7] tls: rx: device: fix checking decryption status Jakub Kicinski
@ 2023-05-11 1:20 ` Jakub Kicinski
2023-05-11 13:45 ` Simon Horman
2023-05-11 1:20 ` [RFC / RFT net 3/7] tls: rx: strp: force mixed decrypted records into copy mode Jakub Kicinski
` (5 subsequent siblings)
7 siblings, 1 reply; 12+ messages in thread
From: Jakub Kicinski @ 2023-05-11 1:20 UTC (permalink / raw)
To: tariqt; +Cc: netdev, Jakub Kicinski
alloc_skb_with_frags() fills in page frag sizes but does not
set skb->len and skb->data_len. Set those correctly, otherwise
device offload will most likely generate an empty skb and
hit the BUG() at the end of __skb_nsg().
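For context, a hedged sketch of what the caller has to do itself after
alloc_skb_with_frags() ("len" stands in for the copy length and is not
a variable from this patch):

    struct sk_buff *skb;
    int err;

    /* alloc_skb_with_frags() sizes the page frags, but skb->len and
     * skb->data_len stay zero -- the skb still claims to be empty. */
    skb = alloc_skb_with_frags(0, len, TLS_PAGE_ORDER, &err,
                               strp->sk->sk_allocation);
    if (!skb)
            return NULL;

    skb->len = len;         /* total payload length */
    skb->data_len = len;    /* all of it lives in page frags */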
Fixes: 84c61fe1a75b ("tls: rx: do not use the standard strparser")
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
net/tls/tls_strp.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/net/tls/tls_strp.c b/net/tls/tls_strp.c
index 955ac3e0bf4d..90b220d1145c 100644
--- a/net/tls/tls_strp.c
+++ b/net/tls/tls_strp.c
@@ -56,6 +56,8 @@ static struct sk_buff *tls_strp_msg_make_copy(struct tls_strparser *strp)
offset += skb_frag_size(frag);
}
+ skb->len = len;
+ skb->data_len = len;
skb_copy_header(skb, strp->anchor);
rxm = strp_msg(skb);
rxm->offset = 0;
--
2.40.1
* Re: [RFC / RFT net 2/7] tls: rx: strp: set the skb->len of detached / CoW'ed skbs
2023-05-11 1:20 ` [RFC / RFT net 2/7] tls: rx: strp: set the skb->len of detached / CoW'ed skbs Jakub Kicinski
@ 2023-05-11 13:45 ` Simon Horman
0 siblings, 0 replies; 12+ messages in thread
From: Simon Horman @ 2023-05-11 13:45 UTC (permalink / raw)
To: Jakub Kicinski; +Cc: tariqt, netdev
On Wed, May 10, 2023 at 06:20:29PM -0700, Jakub Kicinski wrote:
> alloc_skb_with_frags() fills in page frag sizes but does not
> set skb->len and skb->data_len. Set those correctly, otherwise
> device offload will most likely generate an empty skb and
> hit the BUG() at the end of __skb_nsg().
>
> Fixes: 84c61fe1a75b ("tls: rx: do not use the standard strparser")
> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
> ---
> net/tls/tls_strp.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/net/tls/tls_strp.c b/net/tls/tls_strp.c
> index 955ac3e0bf4d..90b220d1145c 100644
> --- a/net/tls/tls_strp.c
> +++ b/net/tls/tls_strp.c
> @@ -56,6 +56,8 @@ static struct sk_buff *tls_strp_msg_make_copy(struct tls_strparser *strp)
> offset += skb_frag_size(frag);
> }
>
> + skb->len = len;
> + skb->data_len = len;
len doesn't seem to exist until patch 5/7.
> skb_copy_header(skb, strp->anchor);
> rxm = strp_msg(skb);
> rxm->offset = 0;
> --
> 2.40.1
>
>
* [RFC / RFT net 3/7] tls: rx: strp: force mixed decrypted records into copy mode
2023-05-11 1:20 [RFC / RFT net 0/7] tls: rx: strp: fix inline crypto offload Jakub Kicinski
2023-05-11 1:20 ` [RFC / RFT net 1/7] tls: rx: device: fix checking decryption status Jakub Kicinski
2023-05-11 1:20 ` [RFC / RFT net 2/7] tls: rx: strp: set the skb->len of detached / CoW'ed skbs Jakub Kicinski
@ 2023-05-11 1:20 ` Jakub Kicinski
2023-05-11 1:20 ` [RFC / RFT net 4/7] tls: rx: strp: fix determining record length in " Jakub Kicinski
` (4 subsequent siblings)
7 siblings, 0 replies; 12+ messages in thread
From: Jakub Kicinski @ 2023-05-11 1:20 UTC (permalink / raw)
To: tariqt; +Cc: netdev, Jakub Kicinski
If a record is partially decrypted we'll have to CoW it anyway,
so go into copy mode and allocate a writable skb right away.
This will make the subsequent fix simpler because we won't have to
teach tls_strp_msg_make_copy() how to copy skbs while preserving
their decryption status.
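As an illustration, a hedged sketch of how the skb_cmp_decrypted()
helper added below can spot such a mixed record (the function
record_is_mixed() is invented for this example):

    /* Walk the skbs queued behind "first" and report whether any of
     * them disagrees with it on ->decrypted.  skb_cmp_decrypted()
     * compiles to 0 when CONFIG_TLS_DEVICE is off, so this is always
     * false there. */
    static bool record_is_mixed(struct sk_buff *first)
    {
            struct sk_buff *iter;

            for (iter = first->next; iter; iter = iter->next)
                    if (skb_cmp_decrypted(first, iter))
                            return true;
            return false;
    }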
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
include/linux/skbuff.h | 10 ++++++++++
net/tls/tls_strp.c | 16 +++++++++++-----
2 files changed, 21 insertions(+), 5 deletions(-)
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 738776ab8838..0b40417457cd 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -1587,6 +1587,16 @@ static inline void skb_copy_hash(struct sk_buff *to, const struct sk_buff *from)
to->l4_hash = from->l4_hash;
};
+static inline int skb_cmp_decrypted(const struct sk_buff *skb1,
+ const struct sk_buff *skb2)
+{
+#ifdef CONFIG_TLS_DEVICE
+ return skb2->decrypted - skb1->decrypted;
+#else
+ return 0;
+#endif
+}
+
static inline void skb_copy_decrypted(struct sk_buff *to,
const struct sk_buff *from)
{
diff --git a/net/tls/tls_strp.c b/net/tls/tls_strp.c
index 90b220d1145c..9a125c28da88 100644
--- a/net/tls/tls_strp.c
+++ b/net/tls/tls_strp.c
@@ -317,15 +317,19 @@ static int tls_strp_read_copy(struct tls_strparser *strp, bool qshort)
return 0;
}
-static bool tls_strp_check_no_dup(struct tls_strparser *strp)
+static bool tls_strp_check_queue_ok(struct tls_strparser *strp)
{
unsigned int len = strp->stm.offset + strp->stm.full_len;
- struct sk_buff *skb;
+ struct sk_buff *first, *skb;
u32 seq;
- skb = skb_shinfo(strp->anchor)->frag_list;
- seq = TCP_SKB_CB(skb)->seq;
+ first = skb_shinfo(strp->anchor)->frag_list;
+ skb = first;
+ seq = TCP_SKB_CB(first)->seq;
+ /* Make sure there's no duplicate data in the queue,
+ * and the decrypted status matches.
+ */
while (skb->len < len) {
seq += skb->len;
len -= skb->len;
@@ -333,6 +337,8 @@ static bool tls_strp_check_no_dup(struct tls_strparser *strp)
if (TCP_SKB_CB(skb)->seq != seq)
return false;
+ if (skb_cmp_decrypted(first, skb))
+ return false;
}
return true;
@@ -413,7 +419,7 @@ static int tls_strp_read_sock(struct tls_strparser *strp)
return tls_strp_read_copy(strp, true);
}
- if (!tls_strp_check_no_dup(strp))
+ if (!tls_strp_check_queue_ok(strp))
return tls_strp_read_copy(strp, false);
strp->msg_ready = 1;
--
2.40.1
* [RFC / RFT net 4/7] tls: rx: strp: fix determining record length in copy mode
2023-05-11 1:20 [RFC / RFT net 0/7] tls: rx: strp: fix inline crypto offload Jakub Kicinski
` (2 preceding siblings ...)
2023-05-11 1:20 ` [RFC / RFT net 3/7] tls: rx: strp: force mixed decrypted records into copy mode Jakub Kicinski
@ 2023-05-11 1:20 ` Jakub Kicinski
2023-05-11 1:20 ` [RFC / RFT net 5/7] tls: rx: strp: factor out copying skb data Jakub Kicinski
` (3 subsequent siblings)
7 siblings, 0 replies; 12+ messages in thread
From: Jakub Kicinski @ 2023-05-11 1:20 UTC (permalink / raw)
To: tariqt; +Cc: netdev, Jakub Kicinski
We call tls_rx_msg_size(skb) before doing skb->len += chunk,
so the tls_rx_msg_size() code will see the old skb->len, most
likely leading to an over-read.
Worst case we will over-read an entire record; the next iteration
will then try to trim the skb, but may end up turning the frag len
negative or discarding the subsequent record (since we already told
TCP we've read it during the previous read, but now we'll trim it
out of the skb).
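Condensed, the fix below switches to a grow-then-trim pattern; a
sketch, with variable names as in the diff (not the exact kernel
code):

    /* Account the new chunk first so the parser sees the true length,
     * then trim back whatever was read past the end of the record. */
    skb->len += chunk;
    skb->data_len += chunk;
    skb_frag_size_add(frag, chunk);

    sz = tls_rx_msg_size(strp, skb);    /* now sees the grown skb->len */
    if (sz > 0 && sz < skb->len) {      /* over-read past the record */
            int over = skb->len - sz;

            skb->len -= over;
            skb->data_len -= over;
            skb_frag_size_add(frag, -over);
            chunk -= over;
    }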
Fixes: 84c61fe1a75b ("tls: rx: do not use the standard strparser")
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
net/tls/tls_strp.c | 21 +++++++++++++++------
1 file changed, 15 insertions(+), 6 deletions(-)
diff --git a/net/tls/tls_strp.c b/net/tls/tls_strp.c
index 9a125c28da88..7b0c8145ace6 100644
--- a/net/tls/tls_strp.c
+++ b/net/tls/tls_strp.c
@@ -210,19 +210,28 @@ static int tls_strp_copyin(read_descriptor_t *desc, struct sk_buff *in_skb,
skb_frag_size(frag),
chunk));
- sz = tls_rx_msg_size(strp, strp->anchor);
+ skb->len += chunk;
+ skb->data_len += chunk;
+ skb_frag_size_add(frag, chunk);
+
+ sz = tls_rx_msg_size(strp, skb);
if (sz < 0) {
desc->error = sz;
return 0;
}
/* We may have over-read, sz == 0 is guaranteed under-read */
- if (sz > 0)
- chunk = min_t(size_t, chunk, sz - skb->len);
+ if (unlikely(sz && sz < skb->len)) {
+ int over = skb->len - sz;
+
+ WARN_ON_ONCE(over > chunk);
+ skb->len -= over;
+ skb->data_len -= over;
+ skb_frag_size_add(frag, -over);
+
+ chunk -= over;
+ }
- skb->len += chunk;
- skb->data_len += chunk;
- skb_frag_size_add(frag, chunk);
frag++;
len -= chunk;
offset += chunk;
--
2.40.1
* [RFC / RFT net 5/7] tls: rx: strp: factor out copying skb data
2023-05-11 1:20 [RFC / RFT net 0/7] tls: rx: strp: fix inline crypto offload Jakub Kicinski
` (3 preceding siblings ...)
2023-05-11 1:20 ` [RFC / RFT net 4/7] tls: rx: strp: fix determining record length in " Jakub Kicinski
@ 2023-05-11 1:20 ` Jakub Kicinski
2023-05-11 1:20 ` [RFC / RFT net 6/7] tls: rx: strp: preserve decryption status of skbs when needed Jakub Kicinski
` (2 subsequent siblings)
7 siblings, 0 replies; 12+ messages in thread
From: Jakub Kicinski @ 2023-05-11 1:20 UTC (permalink / raw)
To: tariqt; +Cc: netdev, Jakub Kicinski
We'll need to copy input skbs individually in the next patch.
Factor that code out (without assuming we're copying a full record).
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
net/tls/tls_strp.c | 29 +++++++++++++++++++++--------
1 file changed, 21 insertions(+), 8 deletions(-)
diff --git a/net/tls/tls_strp.c b/net/tls/tls_strp.c
index 7b0c8145ace6..61fbf84baf9e 100644
--- a/net/tls/tls_strp.c
+++ b/net/tls/tls_strp.c
@@ -34,23 +34,22 @@ static void tls_strp_anchor_free(struct tls_strparser *strp)
strp->anchor = NULL;
}
-/* Create a new skb with the contents of input copied to its page frags */
-static struct sk_buff *tls_strp_msg_make_copy(struct tls_strparser *strp)
+static struct sk_buff *
+tls_strp_skb_copy(struct tls_strparser *strp, struct sk_buff *in_skb,
+ int offset, int len)
{
- struct strp_msg *rxm;
struct sk_buff *skb;
- int i, err, offset;
+ int i, err;
- skb = alloc_skb_with_frags(0, strp->stm.full_len, TLS_PAGE_ORDER,
+ skb = alloc_skb_with_frags(0, len, TLS_PAGE_ORDER,
&err, strp->sk->sk_allocation);
if (!skb)
return NULL;
- offset = strp->stm.offset;
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
- WARN_ON_ONCE(skb_copy_bits(strp->anchor, offset,
+ WARN_ON_ONCE(skb_copy_bits(in_skb, offset,
skb_frag_address(frag),
skb_frag_size(frag)));
offset += skb_frag_size(frag);
@@ -58,7 +57,21 @@ static struct sk_buff *tls_strp_msg_make_copy(struct tls_strparser *strp)
skb->len = len;
skb->data_len = len;
- skb_copy_header(skb, strp->anchor);
+ skb_copy_header(skb, in_skb);
+ return skb;
+}
+
+/* Create a new skb with the contents of input copied to its page frags */
+static struct sk_buff *tls_strp_msg_make_copy(struct tls_strparser *strp)
+{
+ struct strp_msg *rxm;
+ struct sk_buff *skb;
+
+ skb = tls_strp_skb_copy(strp, strp->anchor, strp->stm.offset,
+ strp->stm.full_len);
+ if (!skb)
+ return NULL;
+
rxm = strp_msg(skb);
rxm->offset = 0;
return skb;
--
2.40.1
* [RFC / RFT net 6/7] tls: rx: strp: preserve decryption status of skbs when needed
2023-05-11 1:20 [RFC / RFT net 0/7] tls: rx: strp: fix inline crypto offload Jakub Kicinski
` (4 preceding siblings ...)
2023-05-11 1:20 ` [RFC / RFT net 5/7] tls: rx: strp: factor out copying skb data Jakub Kicinski
@ 2023-05-11 1:20 ` Jakub Kicinski
2023-05-11 1:20 ` [RFC / RFT net 7/7] tls: rx: strp: don't use GFP_KERNEL in softirq context Jakub Kicinski
2023-05-11 10:17 ` [RFC / RFT net 0/7] tls: rx: strp: fix inline crypto offload Tariq Toukan
7 siblings, 0 replies; 12+ messages in thread
From: Jakub Kicinski @ 2023-05-11 1:20 UTC (permalink / raw)
To: tariqt; +Cc: netdev, Jakub Kicinski
When the receive buffer is small we try to copy the data out of
TCP into an skb maintained by TLS to prevent the connection from
stalling. Unfortunately, if a single record is made up of a mix
of decrypted and non-decrypted skbs, combining them into a single
skb loses the decryption status, resulting in decryption errors
or data corruption.
Similarly, when trying to use the TCP receive queue directly we
need to make sure that all the skbs within the record have the
same status. If we don't, the mixed status will be detected
correctly, but we'll CoW the anchor, again collapsing it into a
single paged skb with no decryption status preserved. The "fixup"
code will then not know which parts of the skb to re-encrypt.
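One design detail in the copy-mode path below deserves a note: each
input skb is copied into the anchor's frag_list individually, which is
what preserves its ->decrypted bit, and the list is appended to in
O(1) by caching the tail pointer in first->prev. A sketch of that
append (frag_list_append() is an invented name; the tail-in-prev trick
is a local convention of this code, not a generic frag_list rule):

    static void frag_list_append(struct sk_buff *anchor,
                                 struct sk_buff *nskb)
    {
            struct skb_shared_info *shinfo = skb_shinfo(anchor);
            struct sk_buff *first, *last;

            if (!shinfo->frag_list) {
                    shinfo->frag_list = nskb;
                    nskb->prev = nskb;      /* single element: tail == head */
            } else {
                    first = shinfo->frag_list;
                    last = first->prev;     /* cached tail */
                    last->next = nskb;
                    first->prev = nskb;     /* new tail */
            }
    }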
Fixes: 84c61fe1a75b ("tls: rx: do not use the standard strparser")
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
include/net/tls.h | 1 +
net/tls/tls.h | 5 ++
net/tls/tls_device.c | 22 +++-----
net/tls/tls_strp.c | 117 ++++++++++++++++++++++++++++++++++++-------
4 files changed, 114 insertions(+), 31 deletions(-)
diff --git a/include/net/tls.h b/include/net/tls.h
index 6056ce5a2aa5..596595c4b1af 100644
--- a/include/net/tls.h
+++ b/include/net/tls.h
@@ -126,6 +126,7 @@ struct tls_strparser {
u32 mark : 8;
u32 stopped : 1;
u32 copy_mode : 1;
+ u32 mixed_decrypted : 1;
u32 msg_ready : 1;
struct strp_msg stm;
diff --git a/net/tls/tls.h b/net/tls/tls.h
index 804c3880d028..0672acab2773 100644
--- a/net/tls/tls.h
+++ b/net/tls/tls.h
@@ -167,6 +167,11 @@ static inline bool tls_strp_msg_ready(struct tls_sw_context_rx *ctx)
return ctx->strp.msg_ready;
}
+static inline bool tls_strp_msg_mixed_decrypted(struct tls_sw_context_rx *ctx)
+{
+ return ctx->strp.mixed_decrypted;
+}
+
#ifdef CONFIG_TLS_DEVICE
int tls_device_init(void);
void tls_device_cleanup(void);
diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
index 3b87c7b04ac8..bf69c9d6d06c 100644
--- a/net/tls/tls_device.c
+++ b/net/tls/tls_device.c
@@ -1007,20 +1007,14 @@ int tls_device_decrypted(struct sock *sk, struct tls_context *tls_ctx)
struct tls_sw_context_rx *sw_ctx = tls_sw_ctx_rx(tls_ctx);
struct sk_buff *skb = tls_strp_msg(sw_ctx);
struct strp_msg *rxm = strp_msg(skb);
- int is_decrypted = skb->decrypted;
- int is_encrypted = !is_decrypted;
- struct sk_buff *skb_iter;
- int left;
-
- left = rxm->full_len + rxm->offset - skb_pagelen(skb);
- /* Check if all the data is decrypted already */
- skb_iter = skb_shinfo(skb)->frag_list;
- while (skb_iter && left > 0) {
- is_decrypted &= skb_iter->decrypted;
- is_encrypted &= !skb_iter->decrypted;
-
- left -= skb_iter->len;
- skb_iter = skb_iter->next;
+ int is_decrypted, is_encrypted;
+
+ if (!tls_strp_msg_mixed_decrypted(sw_ctx)) {
+ is_decrypted = skb->decrypted;
+ is_encrypted = !is_decrypted;
+ } else {
+ is_decrypted = 0;
+ is_encrypted = 0;
}
trace_tls_device_decrypted(sk, tcp_sk(sk)->copied_seq - rxm->full_len,
diff --git a/net/tls/tls_strp.c b/net/tls/tls_strp.c
index 61fbf84baf9e..da95abbb7ea3 100644
--- a/net/tls/tls_strp.c
+++ b/net/tls/tls_strp.c
@@ -29,7 +29,8 @@ static void tls_strp_anchor_free(struct tls_strparser *strp)
struct skb_shared_info *shinfo = skb_shinfo(strp->anchor);
DEBUG_NET_WARN_ON_ONCE(atomic_read(&shinfo->dataref) != 1);
- shinfo->frag_list = NULL;
+ if (!strp->copy_mode)
+ shinfo->frag_list = NULL;
consume_skb(strp->anchor);
strp->anchor = NULL;
}
@@ -195,22 +196,22 @@ static void tls_strp_flush_anchor_copy(struct tls_strparser *strp)
for (i = 0; i < shinfo->nr_frags; i++)
__skb_frag_unref(&shinfo->frags[i], false);
shinfo->nr_frags = 0;
+ if (strp->copy_mode) {
+ kfree_skb_list(shinfo->frag_list);
+ shinfo->frag_list = NULL;
+ }
strp->copy_mode = 0;
+ strp->mixed_decrypted = 0;
}
-static int tls_strp_copyin(read_descriptor_t *desc, struct sk_buff *in_skb,
- unsigned int offset, size_t in_len)
+static int tls_strp_copyin_frag(struct tls_strparser *strp, struct sk_buff *skb,
+ struct sk_buff *in_skb, unsigned int offset,
+ size_t in_len)
{
- struct tls_strparser *strp = (struct tls_strparser *)desc->arg.data;
- struct sk_buff *skb;
- skb_frag_t *frag;
size_t len, chunk;
+ skb_frag_t *frag;
int sz;
- if (strp->msg_ready)
- return 0;
-
- skb = strp->anchor;
frag = &skb_shinfo(skb)->frags[skb->len / PAGE_SIZE];
len = in_len;
@@ -228,10 +229,8 @@ static int tls_strp_copyin(read_descriptor_t *desc, struct sk_buff *in_skb,
skb_frag_size_add(frag, chunk);
sz = tls_rx_msg_size(strp, skb);
- if (sz < 0) {
- desc->error = sz;
- return 0;
- }
+ if (sz < 0)
+ return sz;
/* We may have over-read, sz == 0 is guaranteed under-read */
if (unlikely(sz && sz < skb->len)) {
@@ -271,15 +270,99 @@ static int tls_strp_copyin(read_descriptor_t *desc, struct sk_buff *in_skb,
offset += chunk;
}
- if (strp->stm.full_len == skb->len) {
+read_done:
+ return in_len - len;
+}
+
+static int tls_strp_copyin_skb(struct tls_strparser *strp, struct sk_buff *skb,
+ struct sk_buff *in_skb, unsigned int offset,
+ size_t in_len)
+{
+ struct sk_buff *nskb, *first, *last;
+ struct skb_shared_info *shinfo;
+ size_t chunk;
+ int sz;
+
+ if (strp->stm.full_len)
+ chunk = strp->stm.full_len - skb->len;
+ else
+ chunk = TLS_MAX_PAYLOAD_SIZE + PAGE_SIZE;
+ chunk = min(chunk, in_len);
+
+ nskb = tls_strp_skb_copy(strp, in_skb, offset, chunk);
+ if (!nskb)
+ return -ENOMEM;
+
+ shinfo = skb_shinfo(skb);
+ if (!shinfo->frag_list) {
+ shinfo->frag_list = nskb;
+ nskb->prev = nskb;
+ } else {
+ first = shinfo->frag_list;
+ last = first->prev;
+ last->next = nskb;
+ first->prev = nskb;
+ }
+
+ skb->len += chunk;
+ skb->data_len += chunk;
+
+ if (!strp->stm.full_len) {
+ sz = tls_rx_msg_size(strp, skb);
+ if (sz < 0)
+ return sz;
+
+ /* We may have over-read, sz == 0 is guaranteed under-read */
+ if (unlikely(sz && sz < skb->len)) {
+ int over = skb->len - sz;
+
+ WARN_ON_ONCE(over > chunk);
+ skb->len -= over;
+ skb->data_len -= over;
+ __pskb_trim(nskb, nskb->len - over);
+
+ chunk -= over;
+ }
+
+ strp->stm.full_len = sz;
+ }
+
+ return chunk;
+}
+
+static int tls_strp_copyin(read_descriptor_t *desc, struct sk_buff *in_skb,
+ unsigned int offset, size_t in_len)
+{
+ struct tls_strparser *strp = (struct tls_strparser *)desc->arg.data;
+ struct sk_buff *skb;
+ int ret;
+
+ if (strp->msg_ready)
+ return 0;
+
+ skb = strp->anchor;
+ if (!skb->len)
+ skb_copy_decrypted(skb, in_skb);
+ else
+ strp->mixed_decrypted |= !!skb_cmp_decrypted(skb, in_skb);
+
+ if (IS_ENABLED(CONFIG_TLS_DEVICE) && strp->mixed_decrypted)
+ ret = tls_strp_copyin_skb(strp, skb, in_skb, offset, in_len);
+ else
+ ret = tls_strp_copyin_frag(strp, skb, in_skb, offset, in_len);
+ if (ret < 0) {
+ desc->error = ret;
+ ret = 0;
+ }
+
+ if (strp->stm.full_len && strp->stm.full_len == skb->len) {
desc->count = 0;
strp->msg_ready = 1;
tls_rx_msg_ready(strp);
}
-read_done:
- return in_len - len;
+ return ret;
}
static int tls_strp_read_copyin(struct tls_strparser *strp)
--
2.40.1
* [RFC / RFT net 7/7] tls: rx: strp: don't use GFP_KERNEL in softirq context
2023-05-11 1:20 [RFC / RFT net 0/7] tls: rx: strp: fix inline crypto offload Jakub Kicinski
` (5 preceding siblings ...)
2023-05-11 1:20 ` [RFC / RFT net 6/7] tls: rx: strp: preserve decryption status of skbs when needed Jakub Kicinski
@ 2023-05-11 1:20 ` Jakub Kicinski
2023-05-11 10:17 ` [RFC / RFT net 0/7] tls: rx: strp: fix inline crypto offload Tariq Toukan
7 siblings, 0 replies; 12+ messages in thread
From: Jakub Kicinski @ 2023-05-11 1:20 UTC (permalink / raw)
To: tariqt; +Cc: netdev, Jakub Kicinski
When the receive buffer is small, or the TCP rx queue looks too
complicated to bother using directly, we allocate a new skb and
copy the data into it.
We already use sk->sk_allocation... but nothing actually
sets it to GFP_ATOMIC on the ->sk_data_ready() path.
Users of HW offload are far more likely to experience problems
due to scheduling while atomic; "copy mode" is very rarely
triggered with SW crypto.
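The shape of the fix, in sketch form (sk and ctx as in the diff
below):

    /* ->sk_data_ready() runs in softirq context, so force atomic
     * allocations around the strparser call and restore the socket's
     * usual allocation mode afterwards. */
    gfp_t alloc_save = sk->sk_allocation;

    sk->sk_allocation = GFP_ATOMIC;
    tls_strp_data_ready(&ctx->strp);
    sk->sk_allocation = alloc_save;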
Fixes: 84c61fe1a75b ("tls: rx: do not use the standard strparser")
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
net/tls/tls_sw.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index 635b8bf6b937..6e6a7c37d685 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -2304,10 +2304,14 @@ static void tls_data_ready(struct sock *sk)
struct tls_context *tls_ctx = tls_get_ctx(sk);
struct tls_sw_context_rx *ctx = tls_sw_ctx_rx(tls_ctx);
struct sk_psock *psock;
+ gfp_t alloc_save;
trace_sk_data_ready(sk);
+ alloc_save = sk->sk_allocation;
+ sk->sk_allocation = GFP_ATOMIC;
tls_strp_data_ready(&ctx->strp);
+ sk->sk_allocation = alloc_save;
psock = sk_psock_get(sk);
if (psock) {
--
2.40.1
* Re: [RFC / RFT net 0/7] tls: rx: strp: fix inline crypto offload
2023-05-11 1:20 [RFC / RFT net 0/7] tls: rx: strp: fix inline crypto offload Jakub Kicinski
` (6 preceding siblings ...)
2023-05-11 1:20 ` [RFC / RFT net 7/7] tls: rx: strp: don't use GFP_KERNEL in softirq context Jakub Kicinski
@ 2023-05-11 10:17 ` Tariq Toukan
2023-05-16 12:27 ` Tariq Toukan
7 siblings, 1 reply; 12+ messages in thread
From: Tariq Toukan @ 2023-05-11 10:17 UTC (permalink / raw)
To: Jakub Kicinski; +Cc: netdev, Tariq Toukan, drort, samiram, Gal Pressman
On 11/05/2023 4:20, Jakub Kicinski wrote:
> Tariq, here are the fixes for the bug you reported.
> I managed to test with mlx5 (and selftest, obviously).
> I hacked things up for testing to trigger the copy and
> reencrypt paths.
>
> Could you run it thru your tests and LMK if there are
> any more regressions?
>
Hi Jakub,
Thanks for your patches!
I see that several changes were needed.
I tested your series with the repro I had; the issue seems to be resolved.
We are going to run more intensive and comprehensive tests during the
weekend, and we'll update you on the status on Sunday/Monday.
Thanks for the effort!
Regards,
Tariq
* Re: [RFC / RFT net 0/7] tls: rx: strp: fix inline crypto offload
2023-05-11 10:17 ` [RFC / RFT net 0/7] tls: rx: strp: fix inline crypto offload Tariq Toukan
@ 2023-05-16 12:27 ` Tariq Toukan
2023-05-17 1:52 ` Jakub Kicinski
0 siblings, 1 reply; 12+ messages in thread
From: Tariq Toukan @ 2023-05-16 12:27 UTC (permalink / raw)
To: Tariq Toukan, Jakub Kicinski; +Cc: netdev, drort, samiram, Gal Pressman
On 11/05/2023 13:17, Tariq Toukan wrote:
>
>
> On 11/05/2023 4:20, Jakub Kicinski wrote:
>> Tariq, here are the fixes for the bug you reported.
>> I managed to test with mlx5 (and selftest, obviously).
>> I hacked things up for testing to trigger the copy and
>> reencrypt paths.
>>
>> Could you run it thru your tests and LMK if there are
>> any more regressions?
>>
>
> Hi Jakub,
>
> Thanks for your patches!
> I see that several changes were needed.
>
> I tested your series with the repro I had; the issue seems to be resolved.
>
> We are going to run more intensive and comprehensive tests during the
> weekend, and we'll update you on the status on Sunday/Monday.
>
Hi Jakub,
Here's an updated testing status:
1. Reported issue is resolved.
2. All device-offload TLS RX/TX tests passed, except for the one issue
below.
Nothing indicates that this issue is new or directly related to your
fix series. It might have been there for some time, hiding behind the
existing bugs.
Issue description:
The TlsDecryptError / TlsEncryptError counters increase when a bond
interface is being created at the same time.
It doesn't happen each and every time. It reproduced several times in
different runs.
The strange part is that the bond is created and attached to a different
interface, not the one running the TLS traffic!
I think we should progress with the fixes:
Tested-by: Shai Amiram <samiram@nvidia.com>
Regards,
Tariq
* Re: [RFC / RFT net 0/7] tls: rx: strp: fix inline crypto offload
2023-05-16 12:27 ` Tariq Toukan
@ 2023-05-17 1:52 ` Jakub Kicinski
0 siblings, 0 replies; 12+ messages in thread
From: Jakub Kicinski @ 2023-05-17 1:52 UTC (permalink / raw)
To: Tariq Toukan; +Cc: Tariq Toukan, netdev, drort, samiram, Gal Pressman
On Tue, 16 May 2023 15:27:51 +0300 Tariq Toukan wrote:
> Here's an updated testing status:
>
> 1. Reported issue is resolved.
> 2. All device-offload TLS RX/TX tests passed, except for the one issue
> below.
>
> Nothing indicates that this issue is new or directly related to your
> fix series. It might have been there for some time, hiding behind the
> existing bugs.
>
> Issue description:
> TlsDecryptError / TlsEncryptError increase when simultaneously creating
> a bond interface.
> It doesn't happen each and every time. It reproduced several times in
> different runs.
> The strange part is that the bond is created and attached to a different
> interface, not the one running the TLS traffic!
Hm, that's pretty odd / scary. LMK if you hit a dead end, I hope
it's not a memory corruption :S
> I think we should progress with the fixes:
> Tested-by: Shai Amiram <samiram@nvidia.com>
Thanks!