* [PATCH net-next 0/6] tls: pad strparser, internal header, decrypt_ctx etc.
@ 2022-07-07 1:35 Jakub Kicinski
2022-07-07 1:35 ` [PATCH net-next 1/6] strparser: pad sk_skb_cb to avoid straddling cachelines Jakub Kicinski
` (5 more replies)
0 siblings, 6 replies; 9+ messages in thread
From: Jakub Kicinski @ 2022-07-07 1:35 UTC (permalink / raw)
To: davem
Cc: netdev, edumazet, pabeni, borisp, john.fastabend, maximmi, tariqt,
Jakub Kicinski
A grab bag of non-functional refactoring intended to make the
upcoming series, which will let us decrypt into a fresh skb, smaller.
Patches in this series are not strictly required to get the
decryption into a fresh skb going, they are more in the "things
which had been annoying me for a while" category.
Jakub Kicinski (6):
strparser: pad sk_skb_cb to avoid straddling cachelines
tls: rx: always allocate max possible aad size for decrypt
tls: rx: wrap decrypt params in a struct
tls: rx: coalesce exit paths in tls_decrypt_sg()
tls: create an internal header
tls: rx: make tls_wait_data() return a recvmsg retcode
include/net/strparser.h | 12 +-
include/net/tls.h | 279 +-------------------------------
net/strparser/strparser.c | 3 +
net/tls/tls.h | 291 ++++++++++++++++++++++++++++++++++
net/tls/tls_device.c | 3 +-
net/tls/tls_device_fallback.c | 2 +
net/tls/tls_main.c | 23 ++-
net/tls/tls_proc.c | 2 +
net/tls/tls_sw.c | 162 ++++++++++---------
net/tls/tls_toe.c | 2 +
10 files changed, 419 insertions(+), 360 deletions(-)
create mode 100644 net/tls/tls.h
--
2.36.1
^ permalink raw reply [flat|nested] 9+ messages in thread
* [PATCH net-next 1/6] strparser: pad sk_skb_cb to avoid straddling cachelines
2022-07-07 1:35 [PATCH net-next 0/6] tls: pad strparser, internal header, decrypt_ctx etc Jakub Kicinski
@ 2022-07-07 1:35 ` Jakub Kicinski
2022-07-07 1:35 ` [PATCH net-next 2/6] tls: rx: always allocate max possible aad size for decrypt Jakub Kicinski
` (4 subsequent siblings)
5 siblings, 0 replies; 9+ messages in thread
From: Jakub Kicinski @ 2022-07-07 1:35 UTC (permalink / raw)
To: davem
Cc: netdev, edumazet, pabeni, borisp, john.fastabend, maximmi, tariqt,
Jakub Kicinski
sk_skb_cb lives within skb->cb[]. skb->cb[] straddles
2 cache lines, each containing 24B of data.
The first cache line does not contain much interesting
information for users of strparser, so pad things a little.
Previously strp_msg->full_len would live in the first cache
line and strp_msg->offset in the second.
We need to reorder the 8 byte temp_reg with struct tls_msg
to prevent a 4B hole which would push the struct over 48B.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
include/net/strparser.h | 12 ++++++++----
net/strparser/strparser.c | 3 +++
2 files changed, 11 insertions(+), 4 deletions(-)
diff --git a/include/net/strparser.h b/include/net/strparser.h
index a191486eb1e4..88900b05443e 100644
--- a/include/net/strparser.h
+++ b/include/net/strparser.h
@@ -65,15 +65,19 @@ struct _strp_msg {
struct sk_skb_cb {
#define SK_SKB_CB_PRIV_LEN 20
unsigned char data[SK_SKB_CB_PRIV_LEN];
+ /* align strp on cache line boundary within skb->cb[] */
+ unsigned char pad[4];
struct _strp_msg strp;
- /* temp_reg is a temporary register used for bpf_convert_data_end_access
- * when dst_reg == src_reg.
- */
- u64 temp_reg;
+
+ /* strp users' data follows */
struct tls_msg {
u8 control;
u8 decrypted;
} tls;
+ /* temp_reg is a temporary register used for bpf_convert_data_end_access
+ * when dst_reg == src_reg.
+ */
+ u64 temp_reg;
};
static inline struct strp_msg *strp_msg(struct sk_buff *skb)
diff --git a/net/strparser/strparser.c b/net/strparser/strparser.c
index 1a72c67afed5..8299ceb3e373 100644
--- a/net/strparser/strparser.c
+++ b/net/strparser/strparser.c
@@ -533,6 +533,9 @@ EXPORT_SYMBOL_GPL(strp_check_rcv);
static int __init strp_dev_init(void)
{
+ BUILD_BUG_ON(sizeof(struct sk_skb_cb) >
+ sizeof_field(struct sk_buff, cb));
+
strp_wq = create_singlethread_workqueue("kstrp");
if (unlikely(!strp_wq))
return -ENOMEM;
--
2.36.1
* [PATCH net-next 2/6] tls: rx: always allocate max possible aad size for decrypt
2022-07-07 1:35 [PATCH net-next 0/6] tls: pad strparser, internal header, decrypt_ctx etc Jakub Kicinski
2022-07-07 1:35 ` [PATCH net-next 1/6] strparser: pad sk_skb_cb to avoid straddling cachelines Jakub Kicinski
@ 2022-07-07 1:35 ` Jakub Kicinski
2022-07-07 1:35 ` [PATCH net-next 3/6] tls: rx: wrap decrypt params in a struct Jakub Kicinski
` (3 subsequent siblings)
5 siblings, 0 replies; 9+ messages in thread
From: Jakub Kicinski @ 2022-07-07 1:35 UTC (permalink / raw)
To: davem
Cc: netdev, edumazet, pabeni, borisp, john.fastabend, maximmi, tariqt,
Jakub Kicinski
AAD size is either 5 or 13. Really no point complicating
the code for the 8B of difference. This will also let us
turn the chunked up buffer into a sane struct.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
include/net/tls.h | 1 +
net/tls/tls_sw.c | 19 ++++++++++---------
2 files changed, 11 insertions(+), 9 deletions(-)
diff --git a/include/net/tls.h b/include/net/tls.h
index 4fc16ca5f469..9394c0459fe8 100644
--- a/include/net/tls.h
+++ b/include/net/tls.h
@@ -66,6 +66,7 @@
#define MAX_IV_SIZE 16
#define TLS_TAG_SIZE 16
#define TLS_MAX_REC_SEQ_SIZE 8
+#define TLS_MAX_AAD_SIZE TLS_AAD_SPACE_SIZE
/* For CCM mode, the full 16-bytes of IV is made of '4' fields of given sizes.
*
diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index 79043bc3da39..4f6761dd8d86 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -1453,7 +1453,7 @@ static int decrypt_internal(struct sock *sk, struct sk_buff *skb,
aead_size = sizeof(*aead_req) + crypto_aead_reqsize(ctx->aead_recv);
mem_size = aead_size + (nsg * sizeof(struct scatterlist));
- mem_size = mem_size + prot->aad_size;
+ mem_size = mem_size + TLS_MAX_AAD_SIZE;
mem_size = mem_size + MAX_IV_SIZE;
mem_size = mem_size + prot->tail_size;
@@ -1470,7 +1470,7 @@ static int decrypt_internal(struct sock *sk, struct sk_buff *skb,
sgin = (struct scatterlist *)(mem + aead_size);
sgout = sgin + n_sgin;
aad = (u8 *)(sgout + n_sgout);
- iv = aad + prot->aad_size;
+ iv = aad + TLS_MAX_AAD_SIZE;
tail = iv + MAX_IV_SIZE;
/* For CCM based ciphers, first byte of nonce+iv is a constant */
@@ -2474,13 +2474,6 @@ int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx, int tx)
goto free_priv;
}
- /* Sanity-check the sizes for stack allocations. */
- if (iv_size > MAX_IV_SIZE || nonce_size > MAX_IV_SIZE ||
- rec_seq_size > TLS_MAX_REC_SEQ_SIZE || tag_size != TLS_TAG_SIZE) {
- rc = -EINVAL;
- goto free_priv;
- }
-
if (crypto_info->version == TLS_1_3_VERSION) {
nonce_size = 0;
prot->aad_size = TLS_HEADER_SIZE;
@@ -2490,6 +2483,14 @@ int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx, int tx)
prot->tail_size = 0;
}
+ /* Sanity-check the sizes for stack allocations. */
+ if (iv_size > MAX_IV_SIZE || nonce_size > MAX_IV_SIZE ||
+ rec_seq_size > TLS_MAX_REC_SEQ_SIZE || tag_size != TLS_TAG_SIZE ||
+ prot->aad_size > TLS_MAX_AAD_SIZE) {
+ rc = -EINVAL;
+ goto free_priv;
+ }
+
prot->version = crypto_info->version;
prot->cipher_type = crypto_info->cipher_type;
prot->prepend_size = TLS_HEADER_SIZE + nonce_size;
--
2.36.1
* [PATCH net-next 3/6] tls: rx: wrap decrypt params in a struct
2022-07-07 1:35 [PATCH net-next 0/6] tls: pad strparser, internal header, decrypt_ctx etc Jakub Kicinski
2022-07-07 1:35 ` [PATCH net-next 1/6] strparser: pad sk_skb_cb to avoid straddling cachelines Jakub Kicinski
2022-07-07 1:35 ` [PATCH net-next 2/6] tls: rx: always allocate max possible aad size for decrypt Jakub Kicinski
@ 2022-07-07 1:35 ` Jakub Kicinski
2022-07-07 1:35 ` [PATCH net-next 4/6] tls: rx: coalesce exit paths in tls_decrypt_sg() Jakub Kicinski
` (2 subsequent siblings)
5 siblings, 0 replies; 9+ messages in thread
From: Jakub Kicinski @ 2022-07-07 1:35 UTC (permalink / raw)
To: davem
Cc: netdev, edumazet, pabeni, borisp, john.fastabend, maximmi, tariqt,
Jakub Kicinski
The max size of iv + aad + tail is 22B. That's smaller
than a single sg entry (32B). Don't bother with the
memory packing, just create a struct which holds the
max size of those members.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
net/tls/tls_sw.c | 60 ++++++++++++++++++++++++------------------------
1 file changed, 30 insertions(+), 30 deletions(-)
diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index 4f6761dd8d86..5534962963c2 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -50,6 +50,13 @@ struct tls_decrypt_arg {
u8 tail;
};
+struct tls_decrypt_ctx {
+ u8 iv[MAX_IV_SIZE];
+ u8 aad[TLS_MAX_AAD_SIZE];
+ u8 tail;
+ struct scatterlist sg[];
+};
+
noinline void tls_err_abort(struct sock *sk, int err)
{
WARN_ON_ONCE(err >= 0);
@@ -1417,17 +1424,18 @@ static int decrypt_internal(struct sock *sk, struct sk_buff *skb,
struct tls_context *tls_ctx = tls_get_ctx(sk);
struct tls_sw_context_rx *ctx = tls_sw_ctx_rx(tls_ctx);
struct tls_prot_info *prot = &tls_ctx->prot_info;
+ int n_sgin, n_sgout, aead_size, err, pages = 0;
struct strp_msg *rxm = strp_msg(skb);
struct tls_msg *tlm = tls_msg(skb);
- int n_sgin, n_sgout, nsg, mem_size, aead_size, err, pages = 0;
- u8 *aad, *iv, *tail, *mem = NULL;
struct aead_request *aead_req;
struct sk_buff *unused;
struct scatterlist *sgin = NULL;
struct scatterlist *sgout = NULL;
const int data_len = rxm->full_len - prot->overhead_size;
int tail_pages = !!prot->tail_size;
+ struct tls_decrypt_ctx *dctx;
int iv_offset = 0;
+ u8 *mem;
if (darg->zc && (out_iov || out_sg)) {
if (out_iov)
@@ -1449,38 +1457,30 @@ static int decrypt_internal(struct sock *sk, struct sk_buff *skb,
/* Increment to accommodate AAD */
n_sgin = n_sgin + 1;
- nsg = n_sgin + n_sgout;
-
- aead_size = sizeof(*aead_req) + crypto_aead_reqsize(ctx->aead_recv);
- mem_size = aead_size + (nsg * sizeof(struct scatterlist));
- mem_size = mem_size + TLS_MAX_AAD_SIZE;
- mem_size = mem_size + MAX_IV_SIZE;
- mem_size = mem_size + prot->tail_size;
-
/* Allocate a single block of memory which contains
- * aead_req || sgin[] || sgout[] || aad || iv || tail.
- * This order achieves correct alignment for aead_req, sgin, sgout.
+ * aead_req || tls_decrypt_ctx.
+ * Both structs are variable length.
*/
- mem = kmalloc(mem_size, sk->sk_allocation);
+ aead_size = sizeof(*aead_req) + crypto_aead_reqsize(ctx->aead_recv);
+ mem = kmalloc(aead_size + struct_size(dctx, sg, n_sgin + n_sgout),
+ sk->sk_allocation);
if (!mem)
return -ENOMEM;
/* Segment the allocated memory */
aead_req = (struct aead_request *)mem;
- sgin = (struct scatterlist *)(mem + aead_size);
- sgout = sgin + n_sgin;
- aad = (u8 *)(sgout + n_sgout);
- iv = aad + TLS_MAX_AAD_SIZE;
- tail = iv + MAX_IV_SIZE;
+ dctx = (struct tls_decrypt_ctx *)(mem + aead_size);
+ sgin = &dctx->sg[0];
+ sgout = &dctx->sg[n_sgin];
/* For CCM based ciphers, first byte of nonce+iv is a constant */
switch (prot->cipher_type) {
case TLS_CIPHER_AES_CCM_128:
- iv[0] = TLS_AES_CCM_IV_B0_BYTE;
+ dctx->iv[0] = TLS_AES_CCM_IV_B0_BYTE;
iv_offset = 1;
break;
case TLS_CIPHER_SM4_CCM:
- iv[0] = TLS_SM4_CCM_IV_B0_BYTE;
+ dctx->iv[0] = TLS_SM4_CCM_IV_B0_BYTE;
iv_offset = 1;
break;
}
@@ -1488,28 +1488,28 @@ static int decrypt_internal(struct sock *sk, struct sk_buff *skb,
/* Prepare IV */
if (prot->version == TLS_1_3_VERSION ||
prot->cipher_type == TLS_CIPHER_CHACHA20_POLY1305) {
- memcpy(iv + iv_offset, tls_ctx->rx.iv,
+ memcpy(&dctx->iv[iv_offset], tls_ctx->rx.iv,
prot->iv_size + prot->salt_size);
} else {
err = skb_copy_bits(skb, rxm->offset + TLS_HEADER_SIZE,
- iv + iv_offset + prot->salt_size,
+ &dctx->iv[iv_offset] + prot->salt_size,
prot->iv_size);
if (err < 0) {
kfree(mem);
return err;
}
- memcpy(iv + iv_offset, tls_ctx->rx.iv, prot->salt_size);
+ memcpy(&dctx->iv[iv_offset], tls_ctx->rx.iv, prot->salt_size);
}
- xor_iv_with_seq(prot, iv + iv_offset, tls_ctx->rx.rec_seq);
+ xor_iv_with_seq(prot, &dctx->iv[iv_offset], tls_ctx->rx.rec_seq);
/* Prepare AAD */
- tls_make_aad(aad, rxm->full_len - prot->overhead_size +
+ tls_make_aad(dctx->aad, rxm->full_len - prot->overhead_size +
prot->tail_size,
tls_ctx->rx.rec_seq, tlm->control, prot);
/* Prepare sgin */
sg_init_table(sgin, n_sgin);
- sg_set_buf(&sgin[0], aad, prot->aad_size);
+ sg_set_buf(&sgin[0], dctx->aad, prot->aad_size);
err = skb_to_sgvec(skb, &sgin[1],
rxm->offset + prot->prepend_size,
rxm->full_len - prot->prepend_size);
@@ -1521,7 +1521,7 @@ static int decrypt_internal(struct sock *sk, struct sk_buff *skb,
if (n_sgout) {
if (out_iov) {
sg_init_table(sgout, n_sgout);
- sg_set_buf(&sgout[0], aad, prot->aad_size);
+ sg_set_buf(&sgout[0], dctx->aad, prot->aad_size);
err = tls_setup_from_iter(out_iov, data_len,
&pages, &sgout[1],
@@ -1531,7 +1531,7 @@ static int decrypt_internal(struct sock *sk, struct sk_buff *skb,
if (prot->tail_size) {
sg_unmark_end(&sgout[pages]);
- sg_set_buf(&sgout[pages + 1], tail,
+ sg_set_buf(&sgout[pages + 1], &dctx->tail,
prot->tail_size);
sg_mark_end(&sgout[pages + 1]);
}
@@ -1548,13 +1548,13 @@ static int decrypt_internal(struct sock *sk, struct sk_buff *skb,
}
/* Prepare and submit AEAD request */
- err = tls_do_decryption(sk, skb, sgin, sgout, iv,
+ err = tls_do_decryption(sk, skb, sgin, sgout, dctx->iv,
data_len + prot->tail_size, aead_req, darg);
if (darg->async)
return 0;
if (prot->tail_size)
- darg->tail = *tail;
+ darg->tail = dctx->tail;
/* Release the pages in case iov was mapped to pages */
for (; pages > 0; pages--)
--
2.36.1
* [PATCH net-next 4/6] tls: rx: coalesce exit paths in tls_decrypt_sg()
2022-07-07 1:35 [PATCH net-next 0/6] tls: pad strparser, internal header, decrypt_ctx etc Jakub Kicinski
` (2 preceding siblings ...)
2022-07-07 1:35 ` [PATCH net-next 3/6] tls: rx: wrap decrypt params in a struct Jakub Kicinski
@ 2022-07-07 1:35 ` Jakub Kicinski
2022-07-07 1:35 ` [PATCH net-next 5/6] tls: create an internal header Jakub Kicinski
2022-07-07 1:35 ` [PATCH net-next 6/6] tls: rx: make tls_wait_data() return a recvmsg retcode Jakub Kicinski
5 siblings, 0 replies; 9+ messages in thread
From: Jakub Kicinski @ 2022-07-07 1:35 UTC (permalink / raw)
To: davem
Cc: netdev, edumazet, pabeni, borisp, john.fastabend, maximmi, tariqt,
Jakub Kicinski
Jump to the free() call, instead of having to remember
to free the memory in multiple places.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
net/tls/tls_sw.c | 14 +++++---------
1 file changed, 5 insertions(+), 9 deletions(-)
diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index 5534962963c2..2afcf99105fb 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -1494,10 +1494,8 @@ static int decrypt_internal(struct sock *sk, struct sk_buff *skb,
err = skb_copy_bits(skb, rxm->offset + TLS_HEADER_SIZE,
&dctx->iv[iv_offset] + prot->salt_size,
prot->iv_size);
- if (err < 0) {
- kfree(mem);
- return err;
- }
+ if (err < 0)
+ goto exit_free;
memcpy(&dctx->iv[iv_offset], tls_ctx->rx.iv, prot->salt_size);
}
xor_iv_with_seq(prot, &dctx->iv[iv_offset], tls_ctx->rx.rec_seq);
@@ -1513,10 +1511,8 @@ static int decrypt_internal(struct sock *sk, struct sk_buff *skb,
err = skb_to_sgvec(skb, &sgin[1],
rxm->offset + prot->prepend_size,
rxm->full_len - prot->prepend_size);
- if (err < 0) {
- kfree(mem);
- return err;
- }
+ if (err < 0)
+ goto exit_free;
if (n_sgout) {
if (out_iov) {
@@ -1559,7 +1555,7 @@ static int decrypt_internal(struct sock *sk, struct sk_buff *skb,
/* Release the pages in case iov was mapped to pages */
for (; pages > 0; pages--)
put_page(sg_page(&sgout[pages]));
-
+exit_free:
kfree(mem);
return err;
}
--
2.36.1
* [PATCH net-next 5/6] tls: create an internal header
2022-07-07 1:35 [PATCH net-next 0/6] tls: pad strparser, internal header, decrypt_ctx etc Jakub Kicinski
` (3 preceding siblings ...)
2022-07-07 1:35 ` [PATCH net-next 4/6] tls: rx: coalesce exit paths in tls_decrypt_sg() Jakub Kicinski
@ 2022-07-07 1:35 ` Jakub Kicinski
2022-07-07 16:21 ` kernel test robot
2022-07-07 16:54 ` kernel test robot
2022-07-07 1:35 ` [PATCH net-next 6/6] tls: rx: make tls_wait_data() return a recvmsg retcode Jakub Kicinski
5 siblings, 2 replies; 9+ messages in thread
From: Jakub Kicinski @ 2022-07-07 1:35 UTC (permalink / raw)
To: davem
Cc: netdev, edumazet, pabeni, borisp, john.fastabend, maximmi, tariqt,
Jakub Kicinski
include/net/tls.h is getting a little long, and is probably hard
for driver authors to navigate. Split out the internals into a
header which will live under net/tls/. While at it move some
static inlines with a single user into the source files, add
a few tls_ prefixes and fix spelling of 'proccess'.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
include/net/tls.h | 278 +-------------------------------
net/tls/tls.h | 291 ++++++++++++++++++++++++++++++++++
net/tls/tls_device.c | 3 +-
net/tls/tls_device_fallback.c | 2 +
net/tls/tls_main.c | 23 ++-
net/tls/tls_proc.c | 2 +
net/tls/tls_sw.c | 22 ++-
net/tls/tls_toe.c | 2 +
8 files changed, 339 insertions(+), 284 deletions(-)
create mode 100644 net/tls/tls.h
diff --git a/include/net/tls.h b/include/net/tls.h
index 9394c0459fe8..a5c6e3d2c4d6 100644
--- a/include/net/tls.h
+++ b/include/net/tls.h
@@ -39,7 +39,6 @@
#include <linux/crypto.h>
#include <linux/socket.h>
#include <linux/tcp.h>
-#include <linux/skmsg.h>
#include <linux/mutex.h>
#include <linux/netdevice.h>
#include <linux/rcupdate.h>
@@ -50,6 +49,7 @@
#include <crypto/aead.h>
#include <uapi/linux/tls.h>
+struct tls_rec;
/* Maximum data size carried in a TLS record */
#define TLS_MAX_PAYLOAD_SIZE ((size_t)1 << 14)
@@ -78,13 +78,6 @@
#define TLS_AES_CCM_IV_B0_BYTE 2
#define TLS_SM4_CCM_IV_B0_BYTE 2
-#define __TLS_INC_STATS(net, field) \
- __SNMP_INC_STATS((net)->mib.tls_statistics, field)
-#define TLS_INC_STATS(net, field) \
- SNMP_INC_STATS((net)->mib.tls_statistics, field)
-#define TLS_DEC_STATS(net, field) \
- SNMP_DEC_STATS((net)->mib.tls_statistics, field)
-
enum {
TLS_BASE,
TLS_SW,
@@ -93,32 +86,6 @@ enum {
TLS_NUM_CONFIG,
};
-/* TLS records are maintained in 'struct tls_rec'. It stores the memory pages
- * allocated or mapped for each TLS record. After encryption, the records are
- * stores in a linked list.
- */
-struct tls_rec {
- struct list_head list;
- int tx_ready;
- int tx_flags;
-
- struct sk_msg msg_plaintext;
- struct sk_msg msg_encrypted;
-
- /* AAD | msg_plaintext.sg.data | sg_tag */
- struct scatterlist sg_aead_in[2];
- /* AAD | msg_encrypted.sg.data (data contains overhead for hdr & iv & tag) */
- struct scatterlist sg_aead_out[2];
-
- char content_type;
- struct scatterlist sg_content_type;
-
- char aad_space[TLS_AAD_SPACE_SIZE];
- u8 iv_data[MAX_IV_SIZE];
- struct aead_request aead_req;
- u8 aead_req_ctx[];
-};
-
struct tx_work {
struct delayed_work work;
struct sock *sk;
@@ -349,44 +316,6 @@ struct tls_offload_context_rx {
#define TLS_OFFLOAD_CONTEXT_SIZE_RX \
(sizeof(struct tls_offload_context_rx) + TLS_DRIVER_STATE_SIZE_RX)
-struct tls_context *tls_ctx_create(struct sock *sk);
-void tls_ctx_free(struct sock *sk, struct tls_context *ctx);
-void update_sk_prot(struct sock *sk, struct tls_context *ctx);
-
-int wait_on_pending_writer(struct sock *sk, long *timeo);
-int tls_sk_query(struct sock *sk, int optname, char __user *optval,
- int __user *optlen);
-int tls_sk_attach(struct sock *sk, int optname, char __user *optval,
- unsigned int optlen);
-void tls_err_abort(struct sock *sk, int err);
-
-int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx, int tx);
-void tls_update_rx_zc_capable(struct tls_context *tls_ctx);
-void tls_sw_strparser_arm(struct sock *sk, struct tls_context *ctx);
-void tls_sw_strparser_done(struct tls_context *tls_ctx);
-int tls_sw_sendmsg(struct sock *sk, struct msghdr *msg, size_t size);
-int tls_sw_sendpage_locked(struct sock *sk, struct page *page,
- int offset, size_t size, int flags);
-int tls_sw_sendpage(struct sock *sk, struct page *page,
- int offset, size_t size, int flags);
-void tls_sw_cancel_work_tx(struct tls_context *tls_ctx);
-void tls_sw_release_resources_tx(struct sock *sk);
-void tls_sw_free_ctx_tx(struct tls_context *tls_ctx);
-void tls_sw_free_resources_rx(struct sock *sk);
-void tls_sw_release_resources_rx(struct sock *sk);
-void tls_sw_free_ctx_rx(struct tls_context *tls_ctx);
-int tls_sw_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
- int flags, int *addr_len);
-bool tls_sw_sock_is_readable(struct sock *sk);
-ssize_t tls_sw_splice_read(struct socket *sock, loff_t *ppos,
- struct pipe_inode_info *pipe,
- size_t len, unsigned int flags);
-
-int tls_device_sendmsg(struct sock *sk, struct msghdr *msg, size_t size);
-int tls_device_sendpage(struct sock *sk, struct page *page,
- int offset, size_t size, int flags);
-int tls_tx_records(struct sock *sk, int flags);
-
struct tls_record_info *tls_get_record(struct tls_offload_context_tx *context,
u32 seq, u64 *p_record_sn);
@@ -400,58 +329,6 @@ static inline u32 tls_record_start_seq(struct tls_record_info *rec)
return rec->end_seq - rec->len;
}
-int tls_push_sg(struct sock *sk, struct tls_context *ctx,
- struct scatterlist *sg, u16 first_offset,
- int flags);
-int tls_push_partial_record(struct sock *sk, struct tls_context *ctx,
- int flags);
-void tls_free_partial_record(struct sock *sk, struct tls_context *ctx);
-
-static inline struct tls_msg *tls_msg(struct sk_buff *skb)
-{
- struct sk_skb_cb *scb = (struct sk_skb_cb *)skb->cb;
-
- return &scb->tls;
-}
-
-static inline bool tls_is_partially_sent_record(struct tls_context *ctx)
-{
- return !!ctx->partially_sent_record;
-}
-
-static inline bool tls_is_pending_open_record(struct tls_context *tls_ctx)
-{
- return tls_ctx->pending_open_record_frags;
-}
-
-static inline bool is_tx_ready(struct tls_sw_context_tx *ctx)
-{
- struct tls_rec *rec;
-
- rec = list_first_entry(&ctx->tx_list, struct tls_rec, list);
- if (!rec)
- return false;
-
- return READ_ONCE(rec->tx_ready);
-}
-
-static inline u16 tls_user_config(struct tls_context *ctx, bool tx)
-{
- u16 config = tx ? ctx->tx_conf : ctx->rx_conf;
-
- switch (config) {
- case TLS_BASE:
- return TLS_CONF_BASE;
- case TLS_SW:
- return TLS_CONF_SW;
- case TLS_HW:
- return TLS_CONF_HW;
- case TLS_HW_RECORD:
- return TLS_CONF_HW_RECORD;
- }
- return 0;
-}
-
struct sk_buff *
tls_validate_xmit_skb(struct sock *sk, struct net_device *dev,
struct sk_buff *skb);
@@ -470,31 +347,6 @@ static inline bool tls_is_sk_tx_device_offloaded(struct sock *sk)
#endif
}
-static inline bool tls_bigint_increment(unsigned char *seq, int len)
-{
- int i;
-
- for (i = len - 1; i >= 0; i--) {
- ++seq[i];
- if (seq[i] != 0)
- break;
- }
-
- return (i == -1);
-}
-
-static inline void tls_bigint_subtract(unsigned char *seq, int n)
-{
- u64 rcd_sn;
- __be64 *p;
-
- BUILD_BUG_ON(TLS_MAX_REC_SEQ_SIZE != 8);
-
- p = (__be64 *)seq;
- rcd_sn = be64_to_cpu(*p);
- *p = cpu_to_be64(rcd_sn - n);
-}
-
static inline struct tls_context *tls_get_ctx(const struct sock *sk)
{
struct inet_connection_sock *icsk = inet_csk(sk);
@@ -505,82 +357,6 @@ static inline struct tls_context *tls_get_ctx(const struct sock *sk)
return (__force void *)icsk->icsk_ulp_data;
}
-static inline void tls_advance_record_sn(struct sock *sk,
- struct tls_prot_info *prot,
- struct cipher_context *ctx)
-{
- if (tls_bigint_increment(ctx->rec_seq, prot->rec_seq_size))
- tls_err_abort(sk, -EBADMSG);
-
- if (prot->version != TLS_1_3_VERSION &&
- prot->cipher_type != TLS_CIPHER_CHACHA20_POLY1305)
- tls_bigint_increment(ctx->iv + prot->salt_size,
- prot->iv_size);
-}
-
-static inline void tls_fill_prepend(struct tls_context *ctx,
- char *buf,
- size_t plaintext_len,
- unsigned char record_type)
-{
- struct tls_prot_info *prot = &ctx->prot_info;
- size_t pkt_len, iv_size = prot->iv_size;
-
- pkt_len = plaintext_len + prot->tag_size;
- if (prot->version != TLS_1_3_VERSION &&
- prot->cipher_type != TLS_CIPHER_CHACHA20_POLY1305) {
- pkt_len += iv_size;
-
- memcpy(buf + TLS_NONCE_OFFSET,
- ctx->tx.iv + prot->salt_size, iv_size);
- }
-
- /* we cover nonce explicit here as well, so buf should be of
- * size KTLS_DTLS_HEADER_SIZE + KTLS_DTLS_NONCE_EXPLICIT_SIZE
- */
- buf[0] = prot->version == TLS_1_3_VERSION ?
- TLS_RECORD_TYPE_DATA : record_type;
- /* Note that VERSION must be TLS_1_2 for both TLS1.2 and TLS1.3 */
- buf[1] = TLS_1_2_VERSION_MINOR;
- buf[2] = TLS_1_2_VERSION_MAJOR;
- /* we can use IV for nonce explicit according to spec */
- buf[3] = pkt_len >> 8;
- buf[4] = pkt_len & 0xFF;
-}
-
-static inline void tls_make_aad(char *buf,
- size_t size,
- char *record_sequence,
- unsigned char record_type,
- struct tls_prot_info *prot)
-{
- if (prot->version != TLS_1_3_VERSION) {
- memcpy(buf, record_sequence, prot->rec_seq_size);
- buf += 8;
- } else {
- size += prot->tag_size;
- }
-
- buf[0] = prot->version == TLS_1_3_VERSION ?
- TLS_RECORD_TYPE_DATA : record_type;
- buf[1] = TLS_1_2_VERSION_MAJOR;
- buf[2] = TLS_1_2_VERSION_MINOR;
- buf[3] = size >> 8;
- buf[4] = size & 0xFF;
-}
-
-static inline void xor_iv_with_seq(struct tls_prot_info *prot, char *iv, char *seq)
-{
- int i;
-
- if (prot->version == TLS_1_3_VERSION ||
- prot->cipher_type == TLS_CIPHER_CHACHA20_POLY1305) {
- for (i = 0; i < 8; i++)
- iv[i + 4] ^= seq[i];
- }
-}
-
-
static inline struct tls_sw_context_rx *tls_sw_ctx_rx(
const struct tls_context *tls_ctx)
{
@@ -617,9 +393,6 @@ static inline bool tls_sw_has_ctx_rx(const struct sock *sk)
return !!tls_sw_ctx_rx(ctx);
}
-void tls_sw_write_space(struct sock *sk, struct tls_context *ctx);
-void tls_device_write_space(struct sock *sk, struct tls_context *ctx);
-
static inline struct tls_offload_context_rx *
tls_offload_ctx_rx(const struct tls_context *tls_ctx)
{
@@ -694,31 +467,10 @@ static inline bool tls_offload_tx_resync_pending(struct sock *sk)
return ret;
}
-int __net_init tls_proc_init(struct net *net);
-void __net_exit tls_proc_fini(struct net *net);
-
-int tls_proccess_cmsg(struct sock *sk, struct msghdr *msg,
- unsigned char *record_type);
-int decrypt_skb(struct sock *sk, struct sk_buff *skb,
- struct scatterlist *sgout);
struct sk_buff *tls_encrypt_skb(struct sk_buff *skb);
-int tls_sw_fallback_init(struct sock *sk,
- struct tls_offload_context_tx *offload_ctx,
- struct tls_crypto_info *crypto_info);
-
#ifdef CONFIG_TLS_DEVICE
-void tls_device_init(void);
-void tls_device_cleanup(void);
void tls_device_sk_destruct(struct sock *sk);
-int tls_set_device_offload(struct sock *sk, struct tls_context *ctx);
-void tls_device_free_resources_tx(struct sock *sk);
-int tls_set_device_offload_rx(struct sock *sk, struct tls_context *ctx);
-void tls_device_offload_cleanup_rx(struct sock *sk);
-void tls_device_rx_resync_new_rec(struct sock *sk, u32 rcd_len, u32 seq);
-void tls_offload_tx_resync_request(struct sock *sk, u32 got_seq, u32 exp_seq);
-int tls_device_decrypted(struct sock *sk, struct tls_context *tls_ctx,
- struct sk_buff *skb, struct strp_msg *rxm);
static inline bool tls_is_sk_rx_device_offloaded(struct sock *sk)
{
@@ -727,33 +479,5 @@ static inline bool tls_is_sk_rx_device_offloaded(struct sock *sk)
return false;
return tls_get_ctx(sk)->rx_conf == TLS_HW;
}
-#else
-static inline void tls_device_init(void) {}
-static inline void tls_device_cleanup(void) {}
-
-static inline int
-tls_set_device_offload(struct sock *sk, struct tls_context *ctx)
-{
- return -EOPNOTSUPP;
-}
-
-static inline void tls_device_free_resources_tx(struct sock *sk) {}
-
-static inline int
-tls_set_device_offload_rx(struct sock *sk, struct tls_context *ctx)
-{
- return -EOPNOTSUPP;
-}
-
-static inline void tls_device_offload_cleanup_rx(struct sock *sk) {}
-static inline void
-tls_device_rx_resync_new_rec(struct sock *sk, u32 rcd_len, u32 seq) {}
-
-static inline int
-tls_device_decrypted(struct sock *sk, struct tls_context *tls_ctx,
- struct sk_buff *skb, struct strp_msg *rxm)
-{
- return 0;
-}
#endif
#endif /* _TLS_OFFLOAD_H */
diff --git a/net/tls/tls.h b/net/tls/tls.h
new file mode 100644
index 000000000000..687f6635526f
--- /dev/null
+++ b/net/tls/tls.h
@@ -0,0 +1,291 @@
+/*
+ * Copyright (c) 2016-2017, Mellanox Technologies. All rights reserved.
+ * Copyright (c) 2016-2017, Dave Watson <davejwatson@fb.com>. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef _TLS_INT_H
+#define _TLS_INT_H
+
+#include <asm/byteorder.h>
+#include <linux/types.h>
+#include <linux/skmsg.h>
+#include <net/tls.h>
+
+#define __TLS_INC_STATS(net, field) \
+ __SNMP_INC_STATS((net)->mib.tls_statistics, field)
+#define TLS_INC_STATS(net, field) \
+ SNMP_INC_STATS((net)->mib.tls_statistics, field)
+#define TLS_DEC_STATS(net, field) \
+ SNMP_DEC_STATS((net)->mib.tls_statistics, field)
+
+/* TLS records are maintained in 'struct tls_rec'. It stores the memory pages
+ * allocated or mapped for each TLS record. After encryption, the records are
+ * stored in a linked list.
+ */
+struct tls_rec {
+ struct list_head list;
+ int tx_ready;
+ int tx_flags;
+
+ struct sk_msg msg_plaintext;
+ struct sk_msg msg_encrypted;
+
+ /* AAD | msg_plaintext.sg.data | sg_tag */
+ struct scatterlist sg_aead_in[2];
+ /* AAD | msg_encrypted.sg.data (data contains overhead for hdr & iv & tag) */
+ struct scatterlist sg_aead_out[2];
+
+ char content_type;
+ struct scatterlist sg_content_type;
+
+ char aad_space[TLS_AAD_SPACE_SIZE];
+ u8 iv_data[MAX_IV_SIZE];
+ struct aead_request aead_req;
+ u8 aead_req_ctx[];
+};
+
+int __net_init tls_proc_init(struct net *net);
+void __net_exit tls_proc_fini(struct net *net);
+
+struct tls_context *tls_ctx_create(struct sock *sk);
+void tls_ctx_free(struct sock *sk, struct tls_context *ctx);
+void update_sk_prot(struct sock *sk, struct tls_context *ctx);
+
+int wait_on_pending_writer(struct sock *sk, long *timeo);
+int tls_sk_query(struct sock *sk, int optname, char __user *optval,
+ int __user *optlen);
+int tls_sk_attach(struct sock *sk, int optname, char __user *optval,
+ unsigned int optlen);
+void tls_err_abort(struct sock *sk, int err);
+
+int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx, int tx);
+void tls_update_rx_zc_capable(struct tls_context *tls_ctx);
+void tls_sw_strparser_arm(struct sock *sk, struct tls_context *ctx);
+void tls_sw_strparser_done(struct tls_context *tls_ctx);
+int tls_sw_sendmsg(struct sock *sk, struct msghdr *msg, size_t size);
+int tls_sw_sendpage_locked(struct sock *sk, struct page *page,
+ int offset, size_t size, int flags);
+int tls_sw_sendpage(struct sock *sk, struct page *page,
+ int offset, size_t size, int flags);
+void tls_sw_cancel_work_tx(struct tls_context *tls_ctx);
+void tls_sw_release_resources_tx(struct sock *sk);
+void tls_sw_free_ctx_tx(struct tls_context *tls_ctx);
+void tls_sw_free_resources_rx(struct sock *sk);
+void tls_sw_release_resources_rx(struct sock *sk);
+void tls_sw_free_ctx_rx(struct tls_context *tls_ctx);
+int tls_sw_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
+ int flags, int *addr_len);
+bool tls_sw_sock_is_readable(struct sock *sk);
+ssize_t tls_sw_splice_read(struct socket *sock, loff_t *ppos,
+ struct pipe_inode_info *pipe,
+ size_t len, unsigned int flags);
+
+int tls_device_sendmsg(struct sock *sk, struct msghdr *msg, size_t size);
+int tls_device_sendpage(struct sock *sk, struct page *page,
+ int offset, size_t size, int flags);
+int tls_tx_records(struct sock *sk, int flags);
+
+void tls_sw_write_space(struct sock *sk, struct tls_context *ctx);
+void tls_device_write_space(struct sock *sk, struct tls_context *ctx);
+
+int tls_process_cmsg(struct sock *sk, struct msghdr *msg,
+ unsigned char *record_type);
+int decrypt_skb(struct sock *sk, struct sk_buff *skb,
+ struct scatterlist *sgout);
+
+int tls_sw_fallback_init(struct sock *sk,
+ struct tls_offload_context_tx *offload_ctx,
+ struct tls_crypto_info *crypto_info);
+
+static inline struct tls_msg *tls_msg(struct sk_buff *skb)
+{
+ struct sk_skb_cb *scb = (struct sk_skb_cb *)skb->cb;
+
+ return &scb->tls;
+}
+
+#ifdef CONFIG_TLS_DEVICE
+void tls_device_init(void);
+void tls_device_cleanup(void);
+int tls_set_device_offload(struct sock *sk, struct tls_context *ctx);
+void tls_device_free_resources_tx(struct sock *sk);
+int tls_set_device_offload_rx(struct sock *sk, struct tls_context *ctx);
+void tls_device_offload_cleanup_rx(struct sock *sk);
+void tls_device_rx_resync_new_rec(struct sock *sk, u32 rcd_len, u32 seq);
+void tls_offload_tx_resync_request(struct sock *sk, u32 got_seq, u32 exp_seq);
+int tls_device_decrypted(struct sock *sk, struct tls_context *tls_ctx,
+ struct sk_buff *skb, struct strp_msg *rxm);
+#else
+static inline void tls_device_init(void) {}
+static inline void tls_device_cleanup(void) {}
+
+static inline int
+tls_set_device_offload(struct sock *sk, struct tls_context *ctx)
+{
+ return -EOPNOTSUPP;
+}
+
+static inline void tls_device_free_resources_tx(struct sock *sk) {}
+
+static inline int
+tls_set_device_offload_rx(struct sock *sk, struct tls_context *ctx)
+{
+ return -EOPNOTSUPP;
+}
+
+static inline void tls_device_offload_cleanup_rx(struct sock *sk) {}
+static inline void
+tls_device_rx_resync_new_rec(struct sock *sk, u32 rcd_len, u32 seq) {}
+
+static inline int
+tls_device_decrypted(struct sock *sk, struct tls_context *tls_ctx,
+ struct sk_buff *skb, struct strp_msg *rxm)
+{
+ return 0;
+}
+#endif
+
+int tls_push_sg(struct sock *sk, struct tls_context *ctx,
+ struct scatterlist *sg, u16 first_offset,
+ int flags);
+int tls_push_partial_record(struct sock *sk, struct tls_context *ctx,
+ int flags);
+void tls_free_partial_record(struct sock *sk, struct tls_context *ctx);
+
+static inline bool tls_is_partially_sent_record(struct tls_context *ctx)
+{
+ return !!ctx->partially_sent_record;
+}
+
+static inline bool tls_is_pending_open_record(struct tls_context *tls_ctx)
+{
+ return tls_ctx->pending_open_record_frags;
+}
+
+static inline bool tls_bigint_increment(unsigned char *seq, int len)
+{
+ int i;
+
+ for (i = len - 1; i >= 0; i--) {
+ ++seq[i];
+ if (seq[i] != 0)
+ break;
+ }
+
+ return (i == -1);
+}
+
+static inline void tls_bigint_subtract(unsigned char *seq, int n)
+{
+ u64 rcd_sn;
+ __be64 *p;
+
+ BUILD_BUG_ON(TLS_MAX_REC_SEQ_SIZE != 8);
+
+ p = (__be64 *)seq;
+ rcd_sn = be64_to_cpu(*p);
+ *p = cpu_to_be64(rcd_sn - n);
+}
+
+static inline void
+tls_advance_record_sn(struct sock *sk, struct tls_prot_info *prot,
+ struct cipher_context *ctx)
+{
+ if (tls_bigint_increment(ctx->rec_seq, prot->rec_seq_size))
+ tls_err_abort(sk, -EBADMSG);
+
+ if (prot->version != TLS_1_3_VERSION &&
+ prot->cipher_type != TLS_CIPHER_CHACHA20_POLY1305)
+ tls_bigint_increment(ctx->iv + prot->salt_size,
+ prot->iv_size);
+}
+
+static inline void
+tls_xor_iv_with_seq(struct tls_prot_info *prot, char *iv, char *seq)
+{
+ int i;
+
+ if (prot->version == TLS_1_3_VERSION ||
+ prot->cipher_type == TLS_CIPHER_CHACHA20_POLY1305) {
+ for (i = 0; i < 8; i++)
+ iv[i + 4] ^= seq[i];
+ }
+}
+
+static inline void
+tls_fill_prepend(struct tls_context *ctx, char *buf, size_t plaintext_len,
+ unsigned char record_type)
+{
+ struct tls_prot_info *prot = &ctx->prot_info;
+ size_t pkt_len, iv_size = prot->iv_size;
+
+ pkt_len = plaintext_len + prot->tag_size;
+ if (prot->version != TLS_1_3_VERSION &&
+ prot->cipher_type != TLS_CIPHER_CHACHA20_POLY1305) {
+ pkt_len += iv_size;
+
+ memcpy(buf + TLS_NONCE_OFFSET,
+ ctx->tx.iv + prot->salt_size, iv_size);
+ }
+
+ /* we cover nonce explicit here as well, so buf should be of
+ * size KTLS_DTLS_HEADER_SIZE + KTLS_DTLS_NONCE_EXPLICIT_SIZE
+ */
+ buf[0] = prot->version == TLS_1_3_VERSION ?
+ TLS_RECORD_TYPE_DATA : record_type;
+ /* Note that VERSION must be TLS_1_2 for both TLS1.2 and TLS1.3 */
+ buf[1] = TLS_1_2_VERSION_MINOR;
+ buf[2] = TLS_1_2_VERSION_MAJOR;
+ /* we can use IV for nonce explicit according to spec */
+ buf[3] = pkt_len >> 8;
+ buf[4] = pkt_len & 0xFF;
+}
+
+static inline
+void tls_make_aad(char *buf, size_t size, char *record_sequence,
+ unsigned char record_type, struct tls_prot_info *prot)
+{
+ if (prot->version != TLS_1_3_VERSION) {
+ memcpy(buf, record_sequence, prot->rec_seq_size);
+ buf += 8;
+ } else {
+ size += prot->tag_size;
+ }
+
+ buf[0] = prot->version == TLS_1_3_VERSION ?
+ TLS_RECORD_TYPE_DATA : record_type;
+ buf[1] = TLS_1_2_VERSION_MAJOR;
+ buf[2] = TLS_1_2_VERSION_MINOR;
+ buf[3] = size >> 8;
+ buf[4] = size & 0xFF;
+}
+
+#endif
diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
index ec6f4b699a2b..227b92a3064a 100644
--- a/net/tls/tls_device.c
+++ b/net/tls/tls_device.c
@@ -38,6 +38,7 @@
#include <net/tcp.h>
#include <net/tls.h>
+#include "tls.h"
#include "trace.h"
/* device_offload_lock is used to synchronize tls_dev_add
@@ -562,7 +563,7 @@ int tls_device_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
lock_sock(sk);
if (unlikely(msg->msg_controllen)) {
- rc = tls_proccess_cmsg(sk, msg, &record_type);
+ rc = tls_process_cmsg(sk, msg, &record_type);
if (rc)
goto out;
}
diff --git a/net/tls/tls_device_fallback.c b/net/tls/tls_device_fallback.c
index 3bae29ae57ca..618cee704217 100644
--- a/net/tls/tls_device_fallback.c
+++ b/net/tls/tls_device_fallback.c
@@ -34,6 +34,8 @@
#include <crypto/scatterwalk.h>
#include <net/ip6_checksum.h>
+#include "tls.h"
+
static void chain_to_walk(struct scatterlist *sg, struct scatter_walk *walk)
{
struct scatterlist *src = walk->sg;
diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c
index 1b3efc96db0b..f3d9dbfa507e 100644
--- a/net/tls/tls_main.c
+++ b/net/tls/tls_main.c
@@ -45,6 +45,8 @@
#include <net/tls.h>
#include <net/tls_toe.h>
+#include "tls.h"
+
MODULE_AUTHOR("Mellanox Technologies");
MODULE_DESCRIPTION("Transport Layer Security Support");
MODULE_LICENSE("Dual BSD/GPL");
@@ -164,8 +166,8 @@ static int tls_handle_open_record(struct sock *sk, int flags)
return 0;
}
-int tls_proccess_cmsg(struct sock *sk, struct msghdr *msg,
- unsigned char *record_type)
+int tls_process_cmsg(struct sock *sk, struct msghdr *msg,
+ unsigned char *record_type)
{
struct cmsghdr *cmsg;
int rc = -EINVAL;
@@ -1003,6 +1005,23 @@ static void tls_update(struct sock *sk, struct proto *p,
}
}
+static u16 tls_user_config(struct tls_context *ctx, bool tx)
+{
+ u16 config = tx ? ctx->tx_conf : ctx->rx_conf;
+
+ switch (config) {
+ case TLS_BASE:
+ return TLS_CONF_BASE;
+ case TLS_SW:
+ return TLS_CONF_SW;
+ case TLS_HW:
+ return TLS_CONF_HW;
+ case TLS_HW_RECORD:
+ return TLS_CONF_HW_RECORD;
+ }
+ return 0;
+}
+
static int tls_get_info(const struct sock *sk, struct sk_buff *skb)
{
u16 version, cipher_type;
diff --git a/net/tls/tls_proc.c b/net/tls/tls_proc.c
index 0c200000cc45..1246e52b48f6 100644
--- a/net/tls/tls_proc.c
+++ b/net/tls/tls_proc.c
@@ -6,6 +6,8 @@
#include <net/snmp.h>
#include <net/tls.h>
+#include "tls.h"
+
#ifdef CONFIG_PROC_FS
static const struct snmp_mib tls_mib_list[] = {
SNMP_MIB_ITEM("TlsCurrTxSw", LINUX_MIB_TLSCURRTXSW),
diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index 2afcf99105fb..337adab85037 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -44,6 +44,8 @@
#include <net/strparser.h>
#include <net/tls.h>
+#include "tls.h"
+
struct tls_decrypt_arg {
bool zc;
bool async;
@@ -527,7 +529,8 @@ static int tls_do_encryption(struct sock *sk,
memcpy(&rec->iv_data[iv_offset], tls_ctx->tx.iv,
prot->iv_size + prot->salt_size);
- xor_iv_with_seq(prot, rec->iv_data + iv_offset, tls_ctx->tx.rec_seq);
+ tls_xor_iv_with_seq(prot, rec->iv_data + iv_offset,
+ tls_ctx->tx.rec_seq);
sge->offset += prot->prepend_size;
sge->length -= prot->prepend_size;
@@ -964,7 +967,7 @@ int tls_sw_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
lock_sock(sk);
if (unlikely(msg->msg_controllen)) {
- ret = tls_proccess_cmsg(sk, msg, &record_type);
+ ret = tls_process_cmsg(sk, msg, &record_type);
if (ret) {
if (ret == -EINPROGRESS)
num_async++;
@@ -1498,7 +1501,7 @@ static int decrypt_internal(struct sock *sk, struct sk_buff *skb,
goto exit_free;
memcpy(&dctx->iv[iv_offset], tls_ctx->rx.iv, prot->salt_size);
}
- xor_iv_with_seq(prot, &dctx->iv[iv_offset], tls_ctx->rx.rec_seq);
+ tls_xor_iv_with_seq(prot, &dctx->iv[iv_offset], tls_ctx->rx.rec_seq);
/* Prepare AAD */
tls_make_aad(dctx->aad, rxm->full_len - prot->overhead_size +
@@ -2267,12 +2270,23 @@ static void tx_work_handler(struct work_struct *work)
mutex_unlock(&tls_ctx->tx_lock);
}
+static bool tls_is_tx_ready(struct tls_sw_context_tx *ctx)
+{
+ struct tls_rec *rec;
+
+ rec = list_first_entry_or_null(&ctx->tx_list, struct tls_rec, list);
+ if (!rec)
+ return false;
+
+ return READ_ONCE(rec->tx_ready);
+}
+
void tls_sw_write_space(struct sock *sk, struct tls_context *ctx)
{
struct tls_sw_context_tx *tx_ctx = tls_sw_ctx_tx(ctx);
/* Schedule the transmission if tx list is ready */
- if (is_tx_ready(tx_ctx) &&
+ if (tls_is_tx_ready(tx_ctx) &&
!test_and_set_bit(BIT_TX_SCHEDULED, &tx_ctx->tx_bitmask))
schedule_delayed_work(&tx_ctx->tx_work.work, 0);
}
diff --git a/net/tls/tls_toe.c b/net/tls/tls_toe.c
index 7e1330f19165..825669e1ab47 100644
--- a/net/tls/tls_toe.c
+++ b/net/tls/tls_toe.c
@@ -38,6 +38,8 @@
#include <net/tls.h>
#include <net/tls_toe.h>
+#include "tls.h"
+
static LIST_HEAD(device_list);
static DEFINE_SPINLOCK(device_spinlock);
--
2.36.1
^ permalink raw reply related [flat|nested] 9+ messages in thread
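The big-endian counter helper moved into net/tls/tls.h above (tls_bigint_increment())
is easy to exercise in isolation. A minimal userspace sketch of the same logic; the
function name and test values below are illustrative only, not part of the patch:

```c
/* Increment a big-endian byte-string counter in place, mirroring the
 * tls_bigint_increment() helper: walk from the least significant byte,
 * stop at the first byte that does not roll over to zero. Returns 1
 * when the whole counter wraps back to all zeroes, 0 otherwise.
 */
static int be_counter_increment(unsigned char *seq, int len)
{
	int i;

	for (i = len - 1; i >= 0; i--) {
		++seq[i];
		if (seq[i] != 0)
			break;
	}
	return i == -1;
}
```

The wrap indication is what lets tls_advance_record_sn() abort the connection
when the record sequence number would overflow.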
* [PATCH net-next 6/6] tls: rx: make tls_wait_data() return a recvmsg retcode
2022-07-07 1:35 [PATCH net-next 0/6] tls: pad strparser, internal header, decrypt_ctx etc Jakub Kicinski
` (4 preceding siblings ...)
2022-07-07 1:35 ` [PATCH net-next 5/6] tls: create an internal header Jakub Kicinski
@ 2022-07-07 1:35 ` Jakub Kicinski
5 siblings, 0 replies; 9+ messages in thread
From: Jakub Kicinski @ 2022-07-07 1:35 UTC (permalink / raw)
To: davem
Cc: netdev, edumazet, pabeni, borisp, john.fastabend, maximmi, tariqt,
Jakub Kicinski
tls_wait_data() sets the return code as an output parameter
and always returns ctx->recv_pkt on success.
Return the error code directly and let the caller read the skb
from the context. Use a positive return code to indicate that
ctx->recv_pkt is ready.
While touching the definition of the function, rename it.
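The resulting tri-state convention can be sketched in a self-contained way
(the helper name and inputs here are a toy model for illustration, not kernel
code): negative is a recvmsg-style error, 0 means "no record, bail out
quietly" (shutdown, or psock data pending), and 1 means ctx->recv_pkt is
ready for the caller to pick up.

```c
#include <errno.h>

/* Toy model of the return convention tls_rx_rec_wait() adopts:
 *  > 0: a record is queued, the caller reads it from the context
 * == 0: no record and no hard error (shutdown / psock queue non-empty)
 *  < 0: recvmsg-style negative error code
 */
static int rec_wait_model(int have_rec, int sk_err, int nonblock)
{
	if (have_rec)
		return 1;
	if (sk_err)
		return -sk_err;	/* sock_error() style: negated sk_err */
	if (nonblock)
		return -EAGAIN;
	return 0;
}
```

Callers then only need a single `if (err <= 0)` check before falling back to
the psock path or propagating the error, as the recvmsg/splice hunks below do.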
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
net/tls/tls_sw.c | 53 ++++++++++++++++++++++++------------------------
1 file changed, 26 insertions(+), 27 deletions(-)
diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index 337adab85037..e659be0c1e9c 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -1305,54 +1305,50 @@ int tls_sw_sendpage(struct sock *sk, struct page *page,
return ret;
}
-static struct sk_buff *tls_wait_data(struct sock *sk, struct sk_psock *psock,
- bool nonblock, long timeo, int *err)
+static int
+tls_rx_rec_wait(struct sock *sk, struct sk_psock *psock, bool nonblock,
+ long timeo)
{
struct tls_context *tls_ctx = tls_get_ctx(sk);
struct tls_sw_context_rx *ctx = tls_sw_ctx_rx(tls_ctx);
- struct sk_buff *skb;
DEFINE_WAIT_FUNC(wait, woken_wake_function);
- while (!(skb = ctx->recv_pkt) && sk_psock_queue_empty(psock)) {
- if (sk->sk_err) {
- *err = sock_error(sk);
- return NULL;
- }
+ while (!ctx->recv_pkt) {
+ if (!sk_psock_queue_empty(psock))
+ return 0;
+
+ if (sk->sk_err)
+ return sock_error(sk);
if (!skb_queue_empty(&sk->sk_receive_queue)) {
__strp_unpause(&ctx->strp);
if (ctx->recv_pkt)
- return ctx->recv_pkt;
+ break;
}
if (sk->sk_shutdown & RCV_SHUTDOWN)
- return NULL;
+ return 0;
if (sock_flag(sk, SOCK_DONE))
- return NULL;
+ return 0;
- if (nonblock || !timeo) {
- *err = -EAGAIN;
- return NULL;
- }
+ if (nonblock || !timeo)
+ return -EAGAIN;
add_wait_queue(sk_sleep(sk), &wait);
sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk);
sk_wait_event(sk, &timeo,
- ctx->recv_pkt != skb ||
- !sk_psock_queue_empty(psock),
+ ctx->recv_pkt || !sk_psock_queue_empty(psock),
&wait);
sk_clear_bit(SOCKWQ_ASYNC_WAITDATA, sk);
remove_wait_queue(sk_sleep(sk), &wait);
/* Handle signals */
- if (signal_pending(current)) {
- *err = sock_intr_errno(timeo);
- return NULL;
- }
+ if (signal_pending(current))
+ return sock_intr_errno(timeo);
}
- return skb;
+ return 1;
}
static int tls_setup_from_iter(struct iov_iter *from,
@@ -1812,8 +1808,8 @@ int tls_sw_recvmsg(struct sock *sk,
struct tls_decrypt_arg darg = {};
int to_decrypt, chunk;
- skb = tls_wait_data(sk, psock, flags & MSG_DONTWAIT, timeo, &err);
- if (!skb) {
+ err = tls_rx_rec_wait(sk, psock, flags & MSG_DONTWAIT, timeo);
+ if (err <= 0) {
if (psock) {
chunk = sk_msg_recvmsg(sk, psock, msg, len,
flags);
@@ -1823,6 +1819,7 @@ int tls_sw_recvmsg(struct sock *sk,
goto recv_end;
}
+ skb = ctx->recv_pkt;
rxm = strp_msg(skb);
tlm = tls_msg(skb);
@@ -1989,11 +1986,13 @@ ssize_t tls_sw_splice_read(struct socket *sock, loff_t *ppos,
} else {
struct tls_decrypt_arg darg = {};
- skb = tls_wait_data(sk, NULL, flags & SPLICE_F_NONBLOCK, timeo,
- &err);
- if (!skb)
+ err = tls_rx_rec_wait(sk, NULL, flags & SPLICE_F_NONBLOCK,
+ timeo);
+ if (err <= 0)
goto splice_read_end;
+ skb = ctx->recv_pkt;
+
err = decrypt_skb_update(sk, skb, NULL, &darg);
if (err < 0) {
tls_err_abort(sk, -EBADMSG);
--
2.36.1
^ permalink raw reply related [flat|nested] 9+ messages in thread
* Re: [PATCH net-next 5/6] tls: create an internal header
2022-07-07 1:35 ` [PATCH net-next 5/6] tls: create an internal header Jakub Kicinski
@ 2022-07-07 16:21 ` kernel test robot
2022-07-07 16:54 ` kernel test robot
1 sibling, 0 replies; 9+ messages in thread
From: kernel test robot @ 2022-07-07 16:21 UTC (permalink / raw)
To: Jakub Kicinski, davem
Cc: llvm, kbuild-all, netdev, edumazet, pabeni, borisp,
john.fastabend, maximmi, tariqt, Jakub Kicinski
Hi Jakub,
I love your patch! Yet something to improve:
[auto build test ERROR on net-next/master]
url: https://github.com/intel-lab-lkp/linux/commits/Jakub-Kicinski/tls-pad-strparser-internal-header-decrypt_ctx-etc/20220707-120420
base: https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git cd355d0bc60df51266d228c0f69570cdcfa1e6ba
config: i386-randconfig-a015 (https://download.01.org/0day-ci/archive/20220708/202207080051.XdhPoIde-lkp@intel.com/config)
compiler: clang version 15.0.0 (https://github.com/llvm/llvm-project 66ae1d60bb278793fd651cece264699d522bab84)
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/intel-lab-lkp/linux/commit/4088937ef16f0f7a85bc39bb89ab75b33d5e8774
git remote add linux-review https://github.com/intel-lab-lkp/linux
git fetch --no-tags linux-review Jakub-Kicinski/tls-pad-strparser-internal-header-decrypt_ctx-etc/20220707-120420
git checkout 4088937ef16f0f7a85bc39bb89ab75b33d5e8774
# save the config file
mkdir build_dir && cp config build_dir/.config
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=i386 SHELL=/bin/bash drivers/net/ethernet/netronome/nfp/
If you fix the issue, kindly add following tag where applicable
Reported-by: kernel test robot <lkp@intel.com>
All errors (new ones prefixed by >>):
>> drivers/net/ethernet/netronome/nfp/nfp_net_common.c:636:4: error: call to undeclared function 'tls_offload_tx_resync_request'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
tls_offload_tx_resync_request(nskb->sk, seq,
^
drivers/net/ethernet/netronome/nfp/nfp_net_common.c:636:4: note: did you mean 'tls_offload_rx_resync_request'?
include/net/tls.h:420:20: note: 'tls_offload_rx_resync_request' declared here
static inline void tls_offload_rx_resync_request(struct sock *sk, __be32 seq)
^
1 error generated.
vim +/tls_offload_tx_resync_request +636 drivers/net/ethernet/netronome/nfp/nfp_net_common.c
4c3523623dc0b98 Jakub Kicinski 2015-12-01 585
62d033309d62653 Jakub Kicinski 2022-03-21 586 struct sk_buff *
51a5e563298db5c Jakub Kicinski 2019-06-05 587 nfp_net_tls_tx(struct nfp_net_dp *dp, struct nfp_net_r_vector *r_vec,
51a5e563298db5c Jakub Kicinski 2019-06-05 588 struct sk_buff *skb, u64 *tls_handle, int *nr_frags)
c3991d397f2a4d8 Dirk van der Merwe 2019-06-05 589 {
c8d3928ea7e7e53 Jakub Kicinski 2019-07-08 590 #ifdef CONFIG_TLS_DEVICE
c3991d397f2a4d8 Dirk van der Merwe 2019-06-05 591 struct nfp_net_tls_offload_ctx *ntls;
c3991d397f2a4d8 Dirk van der Merwe 2019-06-05 592 struct sk_buff *nskb;
9ed431c1d7cf8c3 Jakub Kicinski 2019-06-10 593 bool resync_pending;
c3991d397f2a4d8 Dirk van der Merwe 2019-06-05 594 u32 datalen, seq;
c3991d397f2a4d8 Dirk van der Merwe 2019-06-05 595
c3991d397f2a4d8 Dirk van der Merwe 2019-06-05 596 if (likely(!dp->ktls_tx))
c3991d397f2a4d8 Dirk van der Merwe 2019-06-05 597 return skb;
c3991d397f2a4d8 Dirk van der Merwe 2019-06-05 598 if (!skb->sk || !tls_is_sk_tx_device_offloaded(skb->sk))
c3991d397f2a4d8 Dirk van der Merwe 2019-06-05 599 return skb;
c3991d397f2a4d8 Dirk van der Merwe 2019-06-05 600
504148fedb85429 Eric Dumazet 2022-06-30 601 datalen = skb->len - skb_tcp_all_headers(skb);
c3991d397f2a4d8 Dirk van der Merwe 2019-06-05 602 seq = ntohl(tcp_hdr(skb)->seq);
c3991d397f2a4d8 Dirk van der Merwe 2019-06-05 603 ntls = tls_driver_ctx(skb->sk, TLS_OFFLOAD_CTX_DIR_TX);
9ed431c1d7cf8c3 Jakub Kicinski 2019-06-10 604 resync_pending = tls_offload_tx_resync_pending(skb->sk);
9ed431c1d7cf8c3 Jakub Kicinski 2019-06-10 605 if (unlikely(resync_pending || ntls->next_seq != seq)) {
c3991d397f2a4d8 Dirk van der Merwe 2019-06-05 606 /* Pure ACK out of order already */
c3991d397f2a4d8 Dirk van der Merwe 2019-06-05 607 if (!datalen)
c3991d397f2a4d8 Dirk van der Merwe 2019-06-05 608 return skb;
c3991d397f2a4d8 Dirk van der Merwe 2019-06-05 609
51a5e563298db5c Jakub Kicinski 2019-06-05 610 u64_stats_update_begin(&r_vec->tx_sync);
51a5e563298db5c Jakub Kicinski 2019-06-05 611 r_vec->tls_tx_fallback++;
51a5e563298db5c Jakub Kicinski 2019-06-05 612 u64_stats_update_end(&r_vec->tx_sync);
51a5e563298db5c Jakub Kicinski 2019-06-05 613
c3991d397f2a4d8 Dirk van der Merwe 2019-06-05 614 nskb = tls_encrypt_skb(skb);
51a5e563298db5c Jakub Kicinski 2019-06-05 615 if (!nskb) {
51a5e563298db5c Jakub Kicinski 2019-06-05 616 u64_stats_update_begin(&r_vec->tx_sync);
51a5e563298db5c Jakub Kicinski 2019-06-05 617 r_vec->tls_tx_no_fallback++;
51a5e563298db5c Jakub Kicinski 2019-06-05 618 u64_stats_update_end(&r_vec->tx_sync);
c3991d397f2a4d8 Dirk van der Merwe 2019-06-05 619 return NULL;
51a5e563298db5c Jakub Kicinski 2019-06-05 620 }
c3991d397f2a4d8 Dirk van der Merwe 2019-06-05 621 /* encryption wasn't necessary */
c3991d397f2a4d8 Dirk van der Merwe 2019-06-05 622 if (nskb == skb)
c3991d397f2a4d8 Dirk van der Merwe 2019-06-05 623 return skb;
c3991d397f2a4d8 Dirk van der Merwe 2019-06-05 624 /* we don't re-check ring space */
c3991d397f2a4d8 Dirk van der Merwe 2019-06-05 625 if (unlikely(skb_is_nonlinear(nskb))) {
c3991d397f2a4d8 Dirk van der Merwe 2019-06-05 626 nn_dp_warn(dp, "tls_encrypt_skb() produced fragmented frame\n");
51a5e563298db5c Jakub Kicinski 2019-06-05 627 u64_stats_update_begin(&r_vec->tx_sync);
51a5e563298db5c Jakub Kicinski 2019-06-05 628 r_vec->tx_errors++;
51a5e563298db5c Jakub Kicinski 2019-06-05 629 u64_stats_update_end(&r_vec->tx_sync);
c3991d397f2a4d8 Dirk van der Merwe 2019-06-05 630 dev_kfree_skb_any(nskb);
c3991d397f2a4d8 Dirk van der Merwe 2019-06-05 631 return NULL;
c3991d397f2a4d8 Dirk van der Merwe 2019-06-05 632 }
c3991d397f2a4d8 Dirk van der Merwe 2019-06-05 633
c3991d397f2a4d8 Dirk van der Merwe 2019-06-05 634 /* jump forward, a TX may have gotten lost, need to sync TX */
9ed431c1d7cf8c3 Jakub Kicinski 2019-06-10 635 if (!resync_pending && seq - ntls->next_seq < U32_MAX / 4)
8538d29cea9530f Jakub Kicinski 2019-10-04 @636 tls_offload_tx_resync_request(nskb->sk, seq,
8538d29cea9530f Jakub Kicinski 2019-10-04 637 ntls->next_seq);
c3991d397f2a4d8 Dirk van der Merwe 2019-06-05 638
c3991d397f2a4d8 Dirk van der Merwe 2019-06-05 639 *nr_frags = 0;
c3991d397f2a4d8 Dirk van der Merwe 2019-06-05 640 return nskb;
c3991d397f2a4d8 Dirk van der Merwe 2019-06-05 641 }
c3991d397f2a4d8 Dirk van der Merwe 2019-06-05 642
51a5e563298db5c Jakub Kicinski 2019-06-05 643 if (datalen) {
51a5e563298db5c Jakub Kicinski 2019-06-05 644 u64_stats_update_begin(&r_vec->tx_sync);
427545b3046326c Jakub Kicinski 2019-07-08 645 if (!skb_is_gso(skb))
51a5e563298db5c Jakub Kicinski 2019-06-05 646 r_vec->hw_tls_tx++;
427545b3046326c Jakub Kicinski 2019-07-08 647 else
427545b3046326c Jakub Kicinski 2019-07-08 648 r_vec->hw_tls_tx += skb_shinfo(skb)->gso_segs;
51a5e563298db5c Jakub Kicinski 2019-06-05 649 u64_stats_update_end(&r_vec->tx_sync);
51a5e563298db5c Jakub Kicinski 2019-06-05 650 }
51a5e563298db5c Jakub Kicinski 2019-06-05 651
c3991d397f2a4d8 Dirk van der Merwe 2019-06-05 652 memcpy(tls_handle, ntls->fw_handle, sizeof(ntls->fw_handle));
c3991d397f2a4d8 Dirk van der Merwe 2019-06-05 653 ntls->next_seq += datalen;
c8d3928ea7e7e53 Jakub Kicinski 2019-07-08 654 #endif
c3991d397f2a4d8 Dirk van der Merwe 2019-06-05 655 return skb;
c3991d397f2a4d8 Dirk van der Merwe 2019-06-05 656 }
c3991d397f2a4d8 Dirk van der Merwe 2019-06-05 657
--
0-DAY CI Kernel Test Service
https://01.org/lkp
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH net-next 5/6] tls: create an internal header
2022-07-07 1:35 ` [PATCH net-next 5/6] tls: create an internal header Jakub Kicinski
2022-07-07 16:21 ` kernel test robot
@ 2022-07-07 16:54 ` kernel test robot
1 sibling, 0 replies; 9+ messages in thread
From: kernel test robot @ 2022-07-07 16:54 UTC (permalink / raw)
To: Jakub Kicinski, davem
Cc: kbuild-all, netdev, edumazet, pabeni, borisp, john.fastabend,
maximmi, tariqt, Jakub Kicinski
Hi Jakub,
I love your patch! Yet something to improve:
[auto build test ERROR on net-next/master]
url: https://github.com/intel-lab-lkp/linux/commits/Jakub-Kicinski/tls-pad-strparser-internal-header-decrypt_ctx-etc/20220707-120420
base: https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git cd355d0bc60df51266d228c0f69570cdcfa1e6ba
config: i386-allyesconfig (https://download.01.org/0day-ci/archive/20220708/202207080041.YiP2JbIW-lkp@intel.com/config)
compiler: gcc-11 (Debian 11.3.0-3) 11.3.0
reproduce (this is a W=1 build):
# https://github.com/intel-lab-lkp/linux/commit/4088937ef16f0f7a85bc39bb89ab75b33d5e8774
git remote add linux-review https://github.com/intel-lab-lkp/linux
git fetch --no-tags linux-review Jakub-Kicinski/tls-pad-strparser-internal-header-decrypt_ctx-etc/20220707-120420
git checkout 4088937ef16f0f7a85bc39bb89ab75b33d5e8774
# save the config file
mkdir build_dir && cp config build_dir/.config
make W=1 O=build_dir ARCH=i386 SHELL=/bin/bash
If you fix the issue, kindly add following tag where applicable
Reported-by: kernel test robot <lkp@intel.com>
All errors (new ones prefixed by >>):
drivers/net/ethernet/fungible/funeth/funeth_tx.c: In function 'fun_tls_tx':
>> drivers/net/ethernet/fungible/funeth/funeth_tx.c:99:25: error: implicit declaration of function 'tls_offload_tx_resync_request'; did you mean 'tls_offload_rx_resync_request'? [-Werror=implicit-function-declaration]
99 | tls_offload_tx_resync_request(skb->sk, seq,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| tls_offload_rx_resync_request
cc1: some warnings being treated as errors
--
drivers/net/ethernet/netronome/nfp/nfp_net_common.c: In function 'nfp_net_tls_tx':
>> drivers/net/ethernet/netronome/nfp/nfp_net_common.c:636:25: error: implicit declaration of function 'tls_offload_tx_resync_request'; did you mean 'tls_offload_rx_resync_request'? [-Werror=implicit-function-declaration]
636 | tls_offload_tx_resync_request(nskb->sk, seq,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| tls_offload_rx_resync_request
cc1: some warnings being treated as errors
vim +99 drivers/net/ethernet/fungible/funeth/funeth_tx.c
db37bc177dae89c Dimitris Michailidis 2022-02-24 78
db37bc177dae89c Dimitris Michailidis 2022-02-24 79 static struct sk_buff *fun_tls_tx(struct sk_buff *skb, struct funeth_txq *q,
db37bc177dae89c Dimitris Michailidis 2022-02-24 80 unsigned int *tls_len)
db37bc177dae89c Dimitris Michailidis 2022-02-24 81 {
b23f9239195a1af Dimitris Michailidis 2022-03-08 82 #if IS_ENABLED(CONFIG_TLS_DEVICE)
db37bc177dae89c Dimitris Michailidis 2022-02-24 83 const struct fun_ktls_tx_ctx *tls_ctx;
db37bc177dae89c Dimitris Michailidis 2022-02-24 84 u32 datalen, seq;
db37bc177dae89c Dimitris Michailidis 2022-02-24 85
504148fedb85429 Eric Dumazet 2022-06-30 86 datalen = skb->len - skb_tcp_all_headers(skb);
db37bc177dae89c Dimitris Michailidis 2022-02-24 87 if (!datalen)
db37bc177dae89c Dimitris Michailidis 2022-02-24 88 return skb;
db37bc177dae89c Dimitris Michailidis 2022-02-24 89
db37bc177dae89c Dimitris Michailidis 2022-02-24 90 if (likely(!tls_offload_tx_resync_pending(skb->sk))) {
db37bc177dae89c Dimitris Michailidis 2022-02-24 91 seq = ntohl(tcp_hdr(skb)->seq);
db37bc177dae89c Dimitris Michailidis 2022-02-24 92 tls_ctx = tls_driver_ctx(skb->sk, TLS_OFFLOAD_CTX_DIR_TX);
db37bc177dae89c Dimitris Michailidis 2022-02-24 93
db37bc177dae89c Dimitris Michailidis 2022-02-24 94 if (likely(tls_ctx->next_seq == seq)) {
db37bc177dae89c Dimitris Michailidis 2022-02-24 95 *tls_len = datalen;
db37bc177dae89c Dimitris Michailidis 2022-02-24 96 return skb;
db37bc177dae89c Dimitris Michailidis 2022-02-24 97 }
db37bc177dae89c Dimitris Michailidis 2022-02-24 98 if (seq - tls_ctx->next_seq < U32_MAX / 4) {
db37bc177dae89c Dimitris Michailidis 2022-02-24 @99 tls_offload_tx_resync_request(skb->sk, seq,
db37bc177dae89c Dimitris Michailidis 2022-02-24 100 tls_ctx->next_seq);
db37bc177dae89c Dimitris Michailidis 2022-02-24 101 }
db37bc177dae89c Dimitris Michailidis 2022-02-24 102 }
db37bc177dae89c Dimitris Michailidis 2022-02-24 103
db37bc177dae89c Dimitris Michailidis 2022-02-24 104 FUN_QSTAT_INC(q, tx_tls_fallback);
db37bc177dae89c Dimitris Michailidis 2022-02-24 105 skb = tls_encrypt_skb(skb);
db37bc177dae89c Dimitris Michailidis 2022-02-24 106 if (!skb)
db37bc177dae89c Dimitris Michailidis 2022-02-24 107 FUN_QSTAT_INC(q, tx_tls_drops);
db37bc177dae89c Dimitris Michailidis 2022-02-24 108
db37bc177dae89c Dimitris Michailidis 2022-02-24 109 return skb;
b23f9239195a1af Dimitris Michailidis 2022-03-08 110 #else
b23f9239195a1af Dimitris Michailidis 2022-03-08 111 return NULL;
db37bc177dae89c Dimitris Michailidis 2022-02-24 112 #endif
b23f9239195a1af Dimitris Michailidis 2022-03-08 113 }
db37bc177dae89c Dimitris Michailidis 2022-02-24 114
--
0-DAY CI Kernel Test Service
https://01.org/lkp
^ permalink raw reply [flat|nested] 9+ messages in thread
end of thread, other threads:[~2022-07-07 16:55 UTC | newest]
Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-07-07 1:35 [PATCH net-next 0/6] tls: pad strparser, internal header, decrypt_ctx etc Jakub Kicinski
2022-07-07 1:35 ` [PATCH net-next 1/6] strparser: pad sk_skb_cb to avoid straddling cachelines Jakub Kicinski
2022-07-07 1:35 ` [PATCH net-next 2/6] tls: rx: always allocate max possible aad size for decrypt Jakub Kicinski
2022-07-07 1:35 ` [PATCH net-next 3/6] tls: rx: wrap decrypt params in a struct Jakub Kicinski
2022-07-07 1:35 ` [PATCH net-next 4/6] tls: rx: coalesce exit paths in tls_decrypt_sg() Jakub Kicinski
2022-07-07 1:35 ` [PATCH net-next 5/6] tls: create an internal header Jakub Kicinski
2022-07-07 16:21 ` kernel test robot
2022-07-07 16:54 ` kernel test robot
2022-07-07 1:35 ` [PATCH net-next 6/6] tls: rx: make tls_wait_data() return a recvmsg retcode Jakub Kicinski
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).