* [net-next PATCH v1 00/15] Enable Inbound IPsec offload on Marvell CN10K SoC
@ 2025-05-02 13:19 Tanmay Jagdale
2025-05-02 13:19 ` [net-next PATCH v1 01/15] crypto: octeontx2: Share engine group info with AF driver Tanmay Jagdale
` (15 more replies)
0 siblings, 16 replies; 43+ messages in thread
From: Tanmay Jagdale @ 2025-05-02 13:19 UTC (permalink / raw)
To: bbrezillon, arno, schalla, herbert, davem, sgoutham, lcherian,
gakula, jerinj, hkelam, sbhatta, andrew+netdev, edumazet, kuba,
pabeni, bbhushan2, bhelgaas, pstanner, gregkh, peterz, linux,
krzysztof.kozlowski, giovanni.cabiddu
Cc: linux-crypto, linux-kernel, netdev, rkannoth, sumang, gcherian,
Tanmay Jagdale
This patch series adds support for inbound inline IPsec flows for the
Marvell CN10K SoC.
The packet flow
---------------
An encrypted IPsec packet goes through two passes in the RVU hardware
before reaching the CPU.
First Pass:
The first pass involves identifying the packet as IPsec, assigning an RQ,
allocating a buffer from the Aura pool, and then sending the packet to CPT
for decryption.
Second Pass:
After CPT decrypts the packet, it sends a metapacket to NIXRX via the X2P
bus. The metapacket contains the CPT_PARSE_HDR_S structure and some initial
bytes of the decrypted packet, which help NIXRX with classification.
CPT also sets BIT(11) of the channel number to further aid identification.
NIXRX allocates a new buffer for this packet and submits it to the CPU.
Once the decrypted metapacket is delivered to the CPU, the driver reads the
WQE pointer from CPT_PARSE_HDR_S in the packet buffer. This WQE points to
the complete decrypted packet. We create an skb from it, set the relevant
XFRM packet mode flags to indicate successful decryption, and submit it
to the network stack.
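As an illustration, the second-pass handling amounts to the sketch below
(the CPT_PARSE_HDR_S field names and the buffer/SA lookup helpers are
simplified placeholders, not the exact driver definitions):

	static void cn10k_ipsec_rx_metapkt(struct otx2_nic *pfvf,
					   void *meta_va)
	{
		struct cpt_parse_hdr_s *cptr = meta_va; /* heads the metapacket */
		struct nix_wqe_rx_s *wqe;
		struct xfrm_offload *xo;
		struct xfrm_state *xs;
		struct sec_path *sp;
		struct sk_buff *skb;

		/* CPT_PARSE_HDR_S carries the pointer to the WQE that
		 * describes the fully decrypted packet.
		 */
		wqe = (void *)be64_to_cpu(cptr->wqe_ptr); /* placeholder field */

		/* Build an skb around the decrypted packet buffer */
		skb = otx2_build_skb_from_wqe(pfvf, wqe); /* placeholder helper */

		/* Find the SA that decrypted this packet and mark the skb
		 * as already processed by the inline crypto engine.
		 */
		xs = cn10k_ipsec_sa_lookup(pfvf, cptr);   /* placeholder helper */
		sp = secpath_set(skb);
		if (xs && sp) {
			xfrm_state_hold(xs);
			sp->xvec[sp->len++] = xs;
			sp->olen++;
			xo = xfrm_offload(skb);
			xo->flags = CRYPTO_DONE;
			xo->status = CRYPTO_SUCCESS;
		}
		napi_gro_receive(pfvf->napi, skb); /* placeholder NAPI context */
	}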
Patches are grouped as follows:
-------------------------------
1) CPT LF movement from crypto driver to RVU AF
0001-crypto-octeontx2-Share-engine-group-info-with-AF-dri.patch
0002-octeontx2-af-Configure-crypto-hardware-for-inline-ip.patch
0003-octeontx2-af-Setup-Large-Memory-Transaction-for-cryp.patch
0004-octeontx2-af-Handle-inbound-inline-ipsec-config-in-A.patch
0005-crypto-octeontx2-Remove-inbound-inline-ipsec-config.patch
2) RVU AF Mailbox changes for CPT 2nd pass RQ mask, SPI-to-SA table,
NIX-CPT BPID configuration
0006-octeontx2-af-Add-support-for-CPT-second-pass.patch
0007-octeontx2-af-Add-support-for-SPI-to-SA-index-transla.patch
0008-octeontx2-af-Add-mbox-to-alloc-free-BPIDs.patch
3) Inbound Inline IPsec support patches
0009-octeontx2-pf-ipsec-Allocate-Ingress-SA-table.patch
0010-octeontx2-pf-ipsec-Setup-NIX-HW-resources-for-inboun.patch
0011-octeontx2-pf-ipsec-Handle-NPA-threshhold-interrupt.patch
0012-octeontx2-pf-ipsec-Initialize-ingress-IPsec.patch
0013-octeontx2-pf-ipsec-Manage-NPC-rules-and-SPI-to-SA-ta.patch
0014-octeontx2-pf-ipsec-Process-CPT-metapackets.patch
0015-octeontx2-pf-ipsec-Add-XFRM-state-and-policy-hooks-f.patch
Bharat Bhushan (5):
crypto: octeontx2: Share engine group info with AF driver
octeontx2-af: Configure crypto hardware for inline ipsec
octeontx2-af: Setup Large Memory Transaction for crypto
octeontx2-af: Handle inbound inline ipsec config in AF
crypto: octeontx2: Remove inbound inline ipsec config
Geetha sowjanya (1):
octeontx2-af: Add mbox to alloc/free BPIDs
Kiran Kumar K (1):
octeontx2-af: Add support for SPI to SA index translation
Rakesh Kudurumalla (1):
octeontx2-af: Add support for CPT second pass
Tanmay Jagdale (7):
octeontx2-pf: ipsec: Allocate Ingress SA table
octeontx2-pf: ipsec: Setup NIX HW resources for inbound flows
octeontx2-pf: ipsec: Handle NPA threshold interrupt
octeontx2-pf: ipsec: Initialize ingress IPsec
octeontx2-pf: ipsec: Manage NPC rules and SPI-to-SA table entries
octeontx2-pf: ipsec: Process CPT metapackets
octeontx2-pf: ipsec: Add XFRM state and policy hooks for inbound flows
.../marvell/octeontx2/otx2_cpt_common.h | 8 -
drivers/crypto/marvell/octeontx2/otx2_cptpf.h | 10 -
.../marvell/octeontx2/otx2_cptpf_main.c | 50 +-
.../marvell/octeontx2/otx2_cptpf_mbox.c | 286 +---
.../marvell/octeontx2/otx2_cptpf_ucode.c | 116 +-
.../marvell/octeontx2/otx2_cptpf_ucode.h | 3 +-
.../ethernet/marvell/octeontx2/af/Makefile | 2 +-
.../ethernet/marvell/octeontx2/af/common.h | 1 +
.../net/ethernet/marvell/octeontx2/af/mbox.h | 119 +-
.../net/ethernet/marvell/octeontx2/af/rvu.c | 9 +-
.../net/ethernet/marvell/octeontx2/af/rvu.h | 71 +
.../ethernet/marvell/octeontx2/af/rvu_cn10k.c | 11 +
.../ethernet/marvell/octeontx2/af/rvu_cpt.c | 706 +++++++++-
.../ethernet/marvell/octeontx2/af/rvu_cpt.h | 71 +
.../ethernet/marvell/octeontx2/af/rvu_nix.c | 230 +++-
.../marvell/octeontx2/af/rvu_nix_spi.c | 220 +++
.../ethernet/marvell/octeontx2/af/rvu_reg.h | 16 +
.../marvell/octeontx2/af/rvu_struct.h | 4 +-
.../marvell/octeontx2/nic/cn10k_ipsec.c | 1191 ++++++++++++++++-
.../marvell/octeontx2/nic/cn10k_ipsec.h | 152 +++
.../marvell/octeontx2/nic/otx2_common.c | 23 +-
.../marvell/octeontx2/nic/otx2_common.h | 16 +
.../ethernet/marvell/octeontx2/nic/otx2_pf.c | 17 +
.../marvell/octeontx2/nic/otx2_struct.h | 16 +
.../marvell/octeontx2/nic/otx2_txrx.c | 25 +-
.../ethernet/marvell/octeontx2/nic/otx2_vf.c | 4 +
26 files changed, 2915 insertions(+), 462 deletions(-)
create mode 100644 drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.h
create mode 100644 drivers/net/ethernet/marvell/octeontx2/af/rvu_nix_spi.c
--
2.43.0
* [net-next PATCH v1 01/15] crypto: octeontx2: Share engine group info with AF driver
2025-05-02 13:19 [net-next PATCH v1 00/15] Enable Inbound IPsec offload on Marvell CN10K SoC Tanmay Jagdale
@ 2025-05-02 13:19 ` Tanmay Jagdale
2025-05-02 13:19 ` [net-next PATCH v1 02/15] octeontx2-af: Configure crypto hardware for inline ipsec Tanmay Jagdale
` (14 subsequent siblings)
15 siblings, 0 replies; 43+ messages in thread
From: Tanmay Jagdale @ 2025-05-02 13:19 UTC (permalink / raw)
To: bbrezillon, arno, schalla, herbert, davem, sgoutham, lcherian,
gakula, jerinj, hkelam, sbhatta, andrew+netdev, edumazet, kuba,
pabeni, bbhushan2, bhelgaas, pstanner, gregkh, peterz, linux,
krzysztof.kozlowski, giovanni.cabiddu
Cc: linux-crypto, linux-kernel, netdev, rkannoth, sumang, gcherian,
Tanmay Jagdale
From: Bharat Bhushan <bbhushan2@marvell.com>
CPT crypto hardware has multiple engines of different types, and the
engines of a given type are attached to one of the engine groups.
Software submits encap/decap work to these engine groups. The engine
group details are available to the CPT crypto driver; share them with
the AF driver using a mailbox message to enable use cases like inline
IPsec.
Also, there is no need to try to delete engine groups if engine group
initialization fails, since engine groups are never created before
engine group initialization.
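Condensed, the new flow added below looks like this (message allocation
and error handling omitted):

	/* CPT PF: after the default engine groups are created, advertise
	 * the group number for each engine type to the AF. A req is
	 * allocated via otx2_mbox_alloc_msg_rsp() on each iteration.
	 */
	for (type = OTX2_CPT_AE_TYPES; type < OTX2_CPT_MAX_ENG_TYPES; type++) {
		req->hdr.id = MBOX_MSG_CPT_SET_ENG_GRP_NUM;
		req->set = true;
		req->eng_type = type;
		req->eng_grp_num = otx2_cpt_get_eng_grp(&cptpf->eng_grps, type);
		otx2_cpt_send_mbox_msg(&cptpf->afpf_mbox, pdev);
	}

	/* AF: rvu_mbox_handler_cpt_set_eng_grp_num() caches the mapping so
	 * later consumers (e.g. inline IPsec setup) can pick the right
	 * engine group without asking the CPT PF.
	 */
	rvu->rvu_cpt.eng_grp[req->eng_type].eng_type = req->eng_type;
	rvu->rvu_cpt.eng_grp[req->eng_type].grp_num = req->eng_grp_num;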
Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
---
.../marvell/octeontx2/otx2_cpt_common.h | 7 --
.../marvell/octeontx2/otx2_cptpf_main.c | 4 +-
.../marvell/octeontx2/otx2_cptpf_mbox.c | 1 +
.../marvell/octeontx2/otx2_cptpf_ucode.c | 116 ++++++++++++++++--
.../marvell/octeontx2/otx2_cptpf_ucode.h | 3 +-
.../net/ethernet/marvell/octeontx2/af/mbox.h | 16 +++
.../net/ethernet/marvell/octeontx2/af/rvu.h | 10 ++
.../ethernet/marvell/octeontx2/af/rvu_cpt.c | 21 ++++
8 files changed, 160 insertions(+), 18 deletions(-)
diff --git a/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h b/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h
index c5b7c57574ef..df735eab8f08 100644
--- a/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h
+++ b/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h
@@ -32,13 +32,6 @@
#define BAD_OTX2_CPT_ENG_TYPE OTX2_CPT_MAX_ENG_TYPES
-enum otx2_cpt_eng_type {
- OTX2_CPT_AE_TYPES = 1,
- OTX2_CPT_SE_TYPES = 2,
- OTX2_CPT_IE_TYPES = 3,
- OTX2_CPT_MAX_ENG_TYPES,
-};
-
/* Take mbox id from end of CPT mbox range in AF (range 0xA00 - 0xBFF) */
#define MBOX_MSG_RX_INLINE_IPSEC_LF_CFG 0xBFE
#define MBOX_MSG_GET_ENG_GRP_NUM 0xBFF
diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c
index 12971300296d..8a7ed0152371 100644
--- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c
+++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c
@@ -813,7 +813,7 @@ static int otx2_cptpf_probe(struct pci_dev *pdev,
sysfs_grp_del:
sysfs_remove_group(&dev->kobj, &cptpf_sysfs_group);
cleanup_eng_grps:
- otx2_cpt_cleanup_eng_grps(pdev, &cptpf->eng_grps);
+ otx2_cpt_cleanup_eng_grps(cptpf);
unregister_intr:
cptpf_disable_afpf_mbox_intr(cptpf);
destroy_afpf_mbox:
@@ -843,7 +843,7 @@ static void otx2_cptpf_remove(struct pci_dev *pdev)
/* Delete sysfs entry created for kernel VF limits */
sysfs_remove_group(&pdev->dev.kobj, &cptpf_sysfs_group);
/* Cleanup engine groups */
- otx2_cpt_cleanup_eng_grps(pdev, &cptpf->eng_grps);
+ otx2_cpt_cleanup_eng_grps(cptpf);
/* Disable AF-PF mailbox interrupt */
cptpf_disable_afpf_mbox_intr(cptpf);
/* Destroy AF-PF mbox */
diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c
index ec1ac7e836a3..5e6f70ac35a7 100644
--- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c
+++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c
@@ -507,6 +507,7 @@ static void process_afpf_mbox_msg(struct otx2_cptpf_dev *cptpf,
case MBOX_MSG_CPT_INLINE_IPSEC_CFG:
case MBOX_MSG_NIX_INLINE_IPSEC_CFG:
case MBOX_MSG_CPT_LF_RESET:
+ case MBOX_MSG_CPT_SET_ENG_GRP_NUM:
break;
default:
diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c
index 42c5484ce66a..17081aed173f 100644
--- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c
+++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c
@@ -1142,6 +1142,68 @@ int otx2_cpt_get_eng_grp(struct otx2_cpt_eng_grps *eng_grps, int eng_type)
return eng_grp_num;
}
+static int otx2_cpt_get_eng_grp_type(struct otx2_cpt_eng_grps *eng_grps,
+ int grp_num)
+{
+ struct otx2_cpt_eng_grp_info *grp;
+
+ grp = &eng_grps->grp[grp_num];
+ if (!grp->is_enabled)
+ return 0;
+
+ if (eng_grp_has_eng_type(grp, OTX2_CPT_SE_TYPES) &&
+ !eng_grp_has_eng_type(grp, OTX2_CPT_IE_TYPES))
+ return OTX2_CPT_SE_TYPES;
+
+ if (eng_grp_has_eng_type(grp, OTX2_CPT_IE_TYPES))
+ return OTX2_CPT_IE_TYPES;
+
+ if (eng_grp_has_eng_type(grp, OTX2_CPT_AE_TYPES))
+ return OTX2_CPT_AE_TYPES;
+ return 0;
+}
+
+static int otx2_cpt_set_eng_grp_num(struct otx2_cptpf_dev *cptpf,
+ enum otx2_cpt_eng_type eng_type, bool set)
+{
+ struct cpt_set_egrp_num *req;
+ struct pci_dev *pdev = cptpf->pdev;
+
+ if (!eng_type || eng_type >= OTX2_CPT_MAX_ENG_TYPES)
+ return -EINVAL;
+
+ req = (struct cpt_set_egrp_num *)
+ otx2_mbox_alloc_msg_rsp(&cptpf->afpf_mbox, 0,
+ sizeof(*req), sizeof(struct msg_rsp));
+ if (!req) {
+ dev_err(&pdev->dev, "RVU MBOX failed to get message.\n");
+ return -EFAULT;
+ }
+
+ memset(req, 0, sizeof(*req));
+ req->hdr.id = MBOX_MSG_CPT_SET_ENG_GRP_NUM;
+ req->hdr.sig = OTX2_MBOX_REQ_SIG;
+ req->hdr.pcifunc = OTX2_CPT_RVU_PFFUNC(cptpf->pf_id, 0);
+ req->set = set;
+ req->eng_type = eng_type;
+ req->eng_grp_num = otx2_cpt_get_eng_grp(&cptpf->eng_grps, eng_type);
+
+ return otx2_cpt_send_mbox_msg(&cptpf->afpf_mbox, pdev);
+}
+
+static int otx2_cpt_set_eng_grp_nums(struct otx2_cptpf_dev *cptpf, bool set)
+{
+ enum otx2_cpt_eng_type type;
+ int ret;
+
+ for (type = OTX2_CPT_AE_TYPES; type < OTX2_CPT_MAX_ENG_TYPES; type++) {
+ ret = otx2_cpt_set_eng_grp_num(cptpf, type, set);
+ if (ret)
+ return ret;
+ }
+ return 0;
+}
+
int otx2_cpt_create_eng_grps(struct otx2_cptpf_dev *cptpf,
struct otx2_cpt_eng_grps *eng_grps)
{
@@ -1222,6 +1284,10 @@ int otx2_cpt_create_eng_grps(struct otx2_cptpf_dev *cptpf,
if (ret)
goto delete_eng_grp;
+ ret = otx2_cpt_set_eng_grp_nums(cptpf, 1);
+ if (ret)
+ goto unset_eng_grp;
+
eng_grps->is_grps_created = true;
cpt_ucode_release_fw(&fw_info);
@@ -1269,6 +1335,8 @@ int otx2_cpt_create_eng_grps(struct otx2_cptpf_dev *cptpf,
mutex_unlock(&eng_grps->lock);
return 0;
+unset_eng_grp:
+ otx2_cpt_set_eng_grp_nums(cptpf, 0);
delete_eng_grp:
delete_engine_grps(pdev, eng_grps);
release_fw:
@@ -1348,9 +1416,10 @@ int otx2_cpt_disable_all_cores(struct otx2_cptpf_dev *cptpf)
return cptx_disable_all_cores(cptpf, total_cores, BLKADDR_CPT0);
}
-void otx2_cpt_cleanup_eng_grps(struct pci_dev *pdev,
- struct otx2_cpt_eng_grps *eng_grps)
+void otx2_cpt_cleanup_eng_grps(struct otx2_cptpf_dev *cptpf)
{
+ struct otx2_cpt_eng_grps *eng_grps = &cptpf->eng_grps;
+ struct pci_dev *pdev = cptpf->pdev;
struct otx2_cpt_eng_grp_info *grp;
int i, j;
@@ -1364,6 +1433,8 @@ void otx2_cpt_cleanup_eng_grps(struct pci_dev *pdev,
grp->engs[j].bmap = NULL;
}
}
+
+ otx2_cpt_set_eng_grp_nums(cptpf, 0);
mutex_unlock(&eng_grps->lock);
}
@@ -1386,8 +1457,7 @@ int otx2_cpt_init_eng_grps(struct pci_dev *pdev,
dev_err(&pdev->dev,
"Number of engines %d > than max supported %d\n",
eng_grps->engs_num, OTX2_CPT_MAX_ENGINES);
- ret = -EINVAL;
- goto cleanup_eng_grps;
+ return -EINVAL;
}
for (i = 0; i < OTX2_CPT_MAX_ENGINE_GROUPS; i++) {
@@ -1401,14 +1471,20 @@ int otx2_cpt_init_eng_grps(struct pci_dev *pdev,
sizeof(long), GFP_KERNEL);
if (!grp->engs[j].bmap) {
ret = -ENOMEM;
- goto cleanup_eng_grps;
+ goto release_bmap;
}
}
}
return 0;
-cleanup_eng_grps:
- otx2_cpt_cleanup_eng_grps(pdev, eng_grps);
+release_bmap:
+ for (i = 0; i < OTX2_CPT_MAX_ENGINE_GROUPS; i++) {
+ grp = &eng_grps->grp[i];
+ for (j = 0; j < OTX2_CPT_MAX_ETYPES_PER_GRP; j++) {
+ kfree(grp->engs[j].bmap);
+ grp->engs[j].bmap = NULL;
+ }
+ }
return ret;
}
@@ -1590,6 +1666,7 @@ int otx2_cpt_dl_custom_egrp_create(struct otx2_cptpf_dev *cptpf,
bool has_se, has_ie, has_ae;
struct fw_info_t fw_info;
int ucode_idx = 0;
+ int egrp;
if (!eng_grps->is_grps_created) {
dev_err(dev, "Not allowed before creating the default groups\n");
@@ -1727,7 +1804,21 @@ int otx2_cpt_dl_custom_egrp_create(struct otx2_cptpf_dev *cptpf,
}
ret = create_engine_group(dev, eng_grps, engs, grp_idx,
(void **)uc_info, 1);
+ if (ret)
+ goto release_fw;
+ ret = otx2_cpt_set_eng_grp_num(cptpf, engs[0].type, 1);
+ if (ret) {
+ egrp = otx2_cpt_get_eng_grp(eng_grps, engs[0].type);
+ ret = delete_engine_group(dev, &eng_grps->grp[egrp]);
+ }
+ if (ucode_idx > 1) {
+ ret = otx2_cpt_set_eng_grp_num(cptpf, engs[1].type, 1);
+ if (ret) {
+ egrp = otx2_cpt_get_eng_grp(eng_grps, engs[1].type);
+ ret = delete_engine_group(dev, &eng_grps->grp[egrp]);
+ }
+ }
release_fw:
cpt_ucode_release_fw(&fw_info);
err_unlock:
@@ -1745,6 +1836,7 @@ int otx2_cpt_dl_custom_egrp_delete(struct otx2_cptpf_dev *cptpf,
struct device *dev = &cptpf->pdev->dev;
char *tmp, *err_msg;
int egrp;
+ int type;
int ret;
err_msg = "Invalid input string format(ex: egrp:0)";
@@ -1766,6 +1858,16 @@ int otx2_cpt_dl_custom_egrp_delete(struct otx2_cptpf_dev *cptpf,
return -EINVAL;
}
mutex_lock(&eng_grps->lock);
+ type = otx2_cpt_get_eng_grp_type(eng_grps, egrp);
+ if (!type) {
+ mutex_unlock(&eng_grps->lock);
+ return -EINVAL;
+ }
+ ret = otx2_cpt_set_eng_grp_num(cptpf, type, 0);
+ if (ret) {
+ mutex_unlock(&eng_grps->lock);
+ return -EINVAL;
+ }
ret = delete_engine_group(dev, &eng_grps->grp[egrp]);
mutex_unlock(&eng_grps->lock);
diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.h b/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.h
index 7e6a6a4ec37c..85ead693e359 100644
--- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.h
+++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.h
@@ -155,8 +155,7 @@ struct otx2_cpt_eng_grps {
struct otx2_cptpf_dev;
int otx2_cpt_init_eng_grps(struct pci_dev *pdev,
struct otx2_cpt_eng_grps *eng_grps);
-void otx2_cpt_cleanup_eng_grps(struct pci_dev *pdev,
- struct otx2_cpt_eng_grps *eng_grps);
+void otx2_cpt_cleanup_eng_grps(struct otx2_cptpf_dev *cptpf);
int otx2_cpt_create_eng_grps(struct otx2_cptpf_dev *cptpf,
struct otx2_cpt_eng_grps *eng_grps);
int otx2_cpt_disable_all_cores(struct otx2_cptpf_dev *cptpf);
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index 005ca8a056c0..973ff5cf1a7d 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -211,6 +211,8 @@ M(CPT_CTX_CACHE_SYNC, 0xA07, cpt_ctx_cache_sync, msg_req, msg_rsp) \
M(CPT_LF_RESET, 0xA08, cpt_lf_reset, cpt_lf_rst_req, msg_rsp) \
M(CPT_FLT_ENG_INFO, 0xA09, cpt_flt_eng_info, cpt_flt_eng_info_req, \
cpt_flt_eng_info_rsp) \
+M(CPT_SET_ENG_GRP_NUM, 0xA0A, cpt_set_eng_grp_num, cpt_set_egrp_num, \
+ msg_rsp) \
/* SDP mbox IDs (range 0x1000 - 0x11FF) */ \
M(SET_SDP_CHAN_INFO, 0x1000, set_sdp_chan_info, sdp_chan_info_msg, msg_rsp) \
M(GET_SDP_CHAN_INFO, 0x1001, get_sdp_chan_info, msg_req, sdp_get_chan_info_msg) \
@@ -1941,6 +1943,20 @@ struct cpt_flt_eng_info_rsp {
u64 rsvd;
};
+enum otx2_cpt_eng_type {
+ OTX2_CPT_AE_TYPES = 1,
+ OTX2_CPT_SE_TYPES = 2,
+ OTX2_CPT_IE_TYPES = 3,
+ OTX2_CPT_MAX_ENG_TYPES,
+};
+
+struct cpt_set_egrp_num {
+ struct mbox_msghdr hdr;
+ bool set;
+ u8 eng_type;
+ u8 eng_grp_num;
+};
+
struct sdp_node_info {
/* Node to which this PF belons to */
u8 node_id;
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index 147d7f5c1fcc..fa403da555ff 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -520,6 +520,15 @@ struct rep_evtq_ent {
struct rep_event event;
};
+struct rvu_cpt_eng_grp {
+ u8 eng_type;
+ u8 grp_num;
+};
+
+struct rvu_cpt {
+ struct rvu_cpt_eng_grp eng_grp[OTX2_CPT_MAX_ENG_TYPES];
+};
+
struct rvu {
void __iomem *afreg_base;
void __iomem *pfreg_base;
@@ -600,6 +609,7 @@ struct rvu {
spinlock_t mcs_intrq_lock;
/* CPT interrupt lock */
spinlock_t cpt_intr_lock;
+ struct rvu_cpt rvu_cpt;
struct mutex mbox_lock; /* Serialize mbox up and down msgs */
u16 rep_pcifunc;
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
index 3c5bbaf12e59..e720ae03133d 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
@@ -656,6 +656,27 @@ static int cpt_inline_ipsec_cfg_outbound(struct rvu *rvu, int blkaddr, u8 cptlf,
return 0;
}
+int rvu_mbox_handler_cpt_set_eng_grp_num(struct rvu *rvu,
+ struct cpt_set_egrp_num *req,
+ struct msg_rsp *rsp)
+{
+ struct rvu_cpt *rvu_cpt = &rvu->rvu_cpt;
+ u8 eng_type = req->eng_type;
+
+ if (!eng_type || eng_type >= OTX2_CPT_MAX_ENG_TYPES)
+ return -EINVAL;
+
+ if (req->set) {
+ rvu_cpt->eng_grp[eng_type].grp_num = req->eng_grp_num;
+ rvu_cpt->eng_grp[eng_type].eng_type = eng_type;
+ } else {
+ rvu_cpt->eng_grp[eng_type].grp_num = 0;
+ rvu_cpt->eng_grp[eng_type].eng_type = 0;
+ }
+
+ return 0;
+}
+
int rvu_mbox_handler_cpt_inline_ipsec_cfg(struct rvu *rvu,
struct cpt_inline_ipsec_cfg_msg *req,
struct msg_rsp *rsp)
--
2.43.0
* [net-next PATCH v1 02/15] octeontx2-af: Configure crypto hardware for inline ipsec
2025-05-02 13:19 [net-next PATCH v1 00/15] Enable Inbound IPsec offload on Marvell CN10K SoC Tanmay Jagdale
2025-05-02 13:19 ` [net-next PATCH v1 01/15] crypto: octeontx2: Share engine group info with AF driver Tanmay Jagdale
@ 2025-05-02 13:19 ` Tanmay Jagdale
2025-05-06 20:24 ` Simon Horman
2025-05-02 13:19 ` [net-next PATCH v1 03/15] octeontx2-af: Setup Large Memory Transaction for crypto Tanmay Jagdale
` (13 subsequent siblings)
15 siblings, 1 reply; 43+ messages in thread
From: Tanmay Jagdale @ 2025-05-02 13:19 UTC (permalink / raw)
To: bbrezillon, arno, schalla, herbert, davem, sgoutham, lcherian,
gakula, jerinj, hkelam, sbhatta, andrew+netdev, edumazet, kuba,
pabeni, bbhushan2, bhelgaas, pstanner, gregkh, peterz, linux,
krzysztof.kozlowski, giovanni.cabiddu
Cc: linux-crypto, linux-kernel, netdev, rkannoth, sumang, gcherian,
Tanmay Jagdale
From: Bharat Bhushan <bbhushan2@marvell.com>
Currently the cpt_rx_inline_lf_cfg mailbox is handled by the CPT PF
driver to configure inbound inline IPsec. Ideally, inbound inline
IPsec configuration should be done by the AF driver.
This patch adds support to allocate, attach and initialize a CPT LF
from the AF. It also configures NIX to send a CPT instruction when a
packet needs inline IPsec processing, and configures the CPT LF to
handle the inline inbound instructions received from NIX.
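At a high level, the new AF-side mailbox handler performs the sequence
sketched below (condensed from rvu_mbox_handler_cpt_rx_inline_lf_cfg()
in the diff; locking and error paths omitted):

	/* Pick the inline-IPsec (IE) engine group advertised in patch 01 */
	egrp = rvu->rvu_cpt.eng_grp[OTX2_CPT_IE_TYPES].grp_num;

	/* Per implemented CPT block: attach an LF to the AF, allocate its
	 * instruction queue, reset the LF, program CPT_LF_Q_BASE and
	 * CPT_LF_Q_SIZE, enable the queue, set engine group + priority,
	 * and register the MISC interrupt handler.
	 */
	rvu_cpt_rx_inline_cptlf_setup(rvu, BLKADDR_CPT0, 0);

	/* Point NIX RX at CPT slot 0 and program instruction credits */
	rvu_cpt_rx_inline_nix_cfg(rvu);

	/* Finally, enable inbound inline IPsec on the attached LF(s) */
	rvu_cpt_rx_inline_ipsec_cfg(rvu);

For sizing, the RVU_CPT_INST_QLEN of 8200 instructions gives a
SIZE_DIV40 of 205 units; the 320 extra entries CPT needs add 8 more, so
CPT_LF_Q_SIZE is programmed with 213 units of 40 instructions each.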
Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
---
.../net/ethernet/marvell/octeontx2/af/mbox.h | 14 +
.../net/ethernet/marvell/octeontx2/af/rvu.c | 4 +-
.../net/ethernet/marvell/octeontx2/af/rvu.h | 34 ++
.../ethernet/marvell/octeontx2/af/rvu_cpt.c | 563 ++++++++++++++++++
.../ethernet/marvell/octeontx2/af/rvu_cpt.h | 67 +++
.../ethernet/marvell/octeontx2/af/rvu_nix.c | 4 +-
.../ethernet/marvell/octeontx2/af/rvu_reg.h | 5 +
7 files changed, 687 insertions(+), 4 deletions(-)
create mode 100644 drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.h
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index 973ff5cf1a7d..8540a04a92f9 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -1950,6 +1950,20 @@ enum otx2_cpt_eng_type {
OTX2_CPT_MAX_ENG_TYPES,
};
+struct cpt_rx_inline_lf_cfg_msg {
+ struct mbox_msghdr hdr;
+ u16 sso_pf_func;
+ u16 param1;
+ u16 param2;
+ u16 opcode;
+ u32 credit;
+ u32 credit_th;
+ u16 bpid;
+ u32 reserved;
+ u8 ctx_ilen_valid : 1;
+ u8 ctx_ilen : 7;
+};
+
struct cpt_set_egrp_num {
struct mbox_msghdr hdr;
bool set;
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
index 6575c422635b..d9f000cda5e5 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
@@ -1775,8 +1775,8 @@ int rvu_mbox_handler_attach_resources(struct rvu *rvu,
return err;
}
-static u16 rvu_get_msix_offset(struct rvu *rvu, struct rvu_pfvf *pfvf,
- int blkaddr, int lf)
+u16 rvu_get_msix_offset(struct rvu *rvu, struct rvu_pfvf *pfvf,
+ int blkaddr, int lf)
{
u16 vec;
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index fa403da555ff..6923fd756b19 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -525,8 +525,38 @@ struct rvu_cpt_eng_grp {
u8 grp_num;
};
+struct rvu_cpt_rx_inline_lf_cfg {
+ u16 sso_pf_func;
+ u16 param1;
+ u16 param2;
+ u16 opcode;
+ u32 credit;
+ u32 credit_th;
+ u16 bpid;
+ u32 reserved;
+ u8 ctx_ilen_valid : 1;
+ u8 ctx_ilen : 7;
+};
+
+struct rvu_cpt_inst_queue {
+ u8 *vaddr;
+ u8 *real_vaddr;
+ dma_addr_t dma_addr;
+ dma_addr_t real_dma_addr;
+ u32 size;
+};
+
struct rvu_cpt {
struct rvu_cpt_eng_grp eng_grp[OTX2_CPT_MAX_ENG_TYPES];
+
+ /* RX inline ipsec lock */
+ struct mutex lock;
+ bool rx_initialized;
+ u16 msix_offset;
+ u8 inline_ipsec_egrp;
+ struct rvu_cpt_inst_queue cpt0_iq;
+ struct rvu_cpt_inst_queue cpt1_iq;
+ struct rvu_cpt_rx_inline_lf_cfg rx_cfg;
};
struct rvu {
@@ -1066,6 +1096,8 @@ void rvu_program_channels(struct rvu *rvu);
/* CN10K NIX */
void rvu_nix_block_cn10k_init(struct rvu *rvu, struct nix_hw *nix_hw);
+void nix_inline_ipsec_cfg(struct rvu *rvu, struct nix_inline_ipsec_cfg *req,
+ int blkaddr);
/* CN10K RVU - LMT*/
void rvu_reset_lmt_map_tbl(struct rvu *rvu, u16 pcifunc);
@@ -1097,6 +1129,8 @@ int rvu_mcs_init(struct rvu *rvu);
int rvu_mcs_flr_handler(struct rvu *rvu, u16 pcifunc);
void rvu_mcs_ptp_cfg(struct rvu *rvu, u8 rpm_id, u8 lmac_id, bool ena);
void rvu_mcs_exit(struct rvu *rvu);
+u16 rvu_get_msix_offset(struct rvu *rvu, struct rvu_pfvf *pfvf,
+ int blkaddr, int lf);
/* Representor APIs */
int rvu_rep_pf_init(struct rvu *rvu);
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
index e720ae03133d..89e0739ba414 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
@@ -11,6 +11,7 @@
#include "rvu_reg.h"
#include "mbox.h"
#include "rvu.h"
+#include "rvu_cpt.h"
/* CPT PF device id */
#define PCI_DEVID_OTX2_CPT_PF 0xA0FD
@@ -968,6 +969,33 @@ int rvu_mbox_handler_cpt_ctx_cache_sync(struct rvu *rvu, struct msg_req *req,
return rvu_cpt_ctx_flush(rvu, req->hdr.pcifunc);
}
+static int cpt_rx_ipsec_lf_reset(struct rvu *rvu, int blkaddr, int slot)
+{
+ struct rvu_block *block;
+ u16 pcifunc = 0;
+ int cptlf, ret;
+ u64 ctl, ctl2;
+
+ block = &rvu->hw->block[blkaddr];
+
+ cptlf = rvu_get_lf(rvu, block, pcifunc, slot);
+ if (cptlf < 0)
+ return CPT_AF_ERR_LF_INVALID;
+
+ ctl = rvu_read64(rvu, blkaddr, CPT_AF_LFX_CTL(cptlf));
+ ctl2 = rvu_read64(rvu, blkaddr, CPT_AF_LFX_CTL2(cptlf));
+
+ ret = rvu_lf_reset(rvu, block, cptlf);
+ if (ret)
+ dev_err(rvu->dev, "Failed to reset blkaddr %d LF%d\n",
+ block->addr, cptlf);
+
+ rvu_write64(rvu, blkaddr, CPT_AF_LFX_CTL(cptlf), ctl);
+ rvu_write64(rvu, blkaddr, CPT_AF_LFX_CTL2(cptlf), ctl2);
+
+ return 0;
+}
+
int rvu_mbox_handler_cpt_lf_reset(struct rvu *rvu, struct cpt_lf_rst_req *req,
struct msg_rsp *rsp)
{
@@ -1087,6 +1115,72 @@ static void cpt_rxc_teardown(struct rvu *rvu, int blkaddr)
#define DQPTR GENMASK_ULL(19, 0)
#define NQPTR GENMASK_ULL(51, 32)
+static void cpt_rx_ipsec_lf_enable_iqueue(struct rvu *rvu, int blkaddr,
+ int slot)
+{
+ u64 val;
+
+ /* Set Execution Enable of instruction queue */
+ val = otx2_cpt_read64(rvu->pfreg_base, blkaddr, slot, CPT_LF_INPROG);
+ val |= BIT_ULL(16);
+ otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot, CPT_LF_INPROG, val);
+
+ /* Set iqueue's enqueuing */
+ val = otx2_cpt_read64(rvu->pfreg_base, blkaddr, slot, CPT_LF_CTL);
+ val |= BIT_ULL(0);
+ otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot, CPT_LF_CTL, val);
+}
+
+static void cpt_rx_ipsec_lf_disable_iqueue(struct rvu *rvu, int blkaddr,
+ int slot)
+{
+ int timeout = 1000000;
+ u64 inprog, inst_ptr;
+ u64 qsize, pending;
+ int i = 0;
+
+ /* Disable instructions enqueuing */
+ otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot, CPT_LF_CTL, 0x0);
+
+ inprog = otx2_cpt_read64(rvu->pfreg_base, blkaddr, slot, CPT_LF_INPROG);
+ inprog |= BIT_ULL(16);
+ otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot, CPT_LF_INPROG, inprog);
+
+ qsize = otx2_cpt_read64(rvu->pfreg_base, blkaddr, slot, CPT_LF_Q_SIZE)
+ & 0x7FFF;
+ do {
+ inst_ptr = otx2_cpt_read64(rvu->pfreg_base, blkaddr, slot,
+ CPT_LF_Q_INST_PTR);
+ pending = (FIELD_GET(XQ_XOR, inst_ptr) * qsize * 40) +
+ FIELD_GET(NQPTR, inst_ptr) -
+ FIELD_GET(DQPTR, inst_ptr);
+ udelay(1);
+ timeout--;
+ } while ((pending != 0) && (timeout != 0));
+
+ if (timeout == 0)
+ dev_warn(rvu->dev, "TIMEOUT: CPT poll on pending instructions\n");
+
+ timeout = 1000000;
+ /* Wait for CPT queue to become execution-quiescent */
+ do {
+ inprog = otx2_cpt_read64(rvu->pfreg_base, blkaddr, slot,
+ CPT_LF_INPROG);
+ if ((FIELD_GET(INFLIGHT, inprog) == 0) &&
+ (FIELD_GET(GRB_CNT, inprog) == 0)) {
+ i++;
+ } else {
+ i = 0;
+ timeout--;
+ }
+ } while ((timeout != 0) && (i < 10));
+
+ if (timeout == 0)
+ dev_warn(rvu->dev, "TIMEOUT: CPT poll on inflight count\n");
+ /* Wait for 2 us to flush all queue writes to memory */
+ udelay(2);
+}
+
static void cpt_lf_disable_iqueue(struct rvu *rvu, int blkaddr, int slot)
{
int timeout = 1000000;
@@ -1310,6 +1404,474 @@ int rvu_cpt_ctx_flush(struct rvu *rvu, u16 pcifunc)
return 0;
}
+static irqreturn_t rvu_cpt_rx_ipsec_misc_intr_handler(int irq, void *ptr)
+{
+ struct rvu_block *block = ptr;
+ struct rvu *rvu = block->rvu;
+ int blkaddr = block->addr;
+ struct device *dev = rvu->dev;
+ int slot = 0;
+ u64 val;
+
+ val = otx2_cpt_read64(rvu->pfreg_base, blkaddr, slot, CPT_LF_MISC_INT);
+
+ if (val & (1 << 6)) {
+ dev_err(dev, "Memory error detected while executing CPT_INST_S, LF %d.\n",
+ slot);
+ } else if (val & (1 << 5)) {
+ dev_err(dev, "HW error from an engine executing CPT_INST_S, LF %d.",
+ slot);
+ } else if (val & (1 << 3)) {
+ dev_err(dev, "SMMU fault while writing CPT_RES_S to CPT_INST_S[RES_ADDR], LF %d.\n",
+ slot);
+ } else if (val & (1 << 2)) {
+ dev_err(dev, "Memory error when accessing instruction memory queue CPT_LF_Q_BASE[ADDR].\n");
+ } else if (val & (1 << 1)) {
+ dev_err(dev, "Error enqueuing an instruction received at CPT_LF_NQ.\n");
+ } else {
+ dev_err(dev, "Unhandled interrupt in CPT LF %d\n", slot);
+ return IRQ_NONE;
+ }
+
+ /* Acknowledge interrupts */
+ otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot, CPT_LF_MISC_INT,
+ val & CPT_LF_MISC_INT_MASK);
+
+ return IRQ_HANDLED;
+}
+
+static int rvu_cpt_rx_inline_setup_irq(struct rvu *rvu, int blkaddr, int slot)
+{
+ struct rvu_hwinfo *hw = rvu->hw;
+ struct rvu_block *block;
+ struct rvu_pfvf *pfvf;
+ u16 msix_offset;
+ int pcifunc = 0;
+ int ret, cptlf;
+
+ pfvf = rvu_get_pfvf(rvu, pcifunc);
+ if (!pfvf->msix.bmap)
+ return -ENODEV;
+
+ block = &hw->block[blkaddr];
+ cptlf = rvu_get_lf(rvu, block, pcifunc, slot);
+ if (cptlf < 0)
+ return CPT_AF_ERR_LF_INVALID;
+
+ msix_offset = rvu_get_msix_offset(rvu, pfvf, blkaddr, cptlf);
+ if (msix_offset == MSIX_VECTOR_INVALID)
+ return -ENODEV;
+
+ ret = rvu_cpt_do_register_interrupt(block, msix_offset,
+ rvu_cpt_rx_ipsec_misc_intr_handler,
+ "CPTLF RX IPSEC MISC");
+ if (ret)
+ return ret;
+
+ /* Enable All Misc interrupts */
+ otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot,
+ CPT_LF_MISC_INT_ENA_W1S, CPT_LF_MISC_INT_MASK);
+
+ rvu->rvu_cpt.msix_offset = msix_offset;
+ return 0;
+}
+
+static void rvu_cpt_rx_inline_cleanup_irq(struct rvu *rvu, int blkaddr,
+ int slot)
+{
+ struct rvu_hwinfo *hw = rvu->hw;
+ struct rvu_block *block;
+
+ /* Disable All Misc interrupts */
+ otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot,
+ CPT_LF_MISC_INT_ENA_W1C, CPT_LF_MISC_INT_MASK);
+
+ block = &hw->block[blkaddr];
+ free_irq(pci_irq_vector(rvu->pdev, rvu->rvu_cpt.msix_offset), block);
+}
+
+static int rvu_rx_attach_cptlf(struct rvu *rvu, int blkaddr)
+{
+ struct rsrc_attach attach;
+
+ memset(&attach, 0, sizeof(struct rsrc_attach));
+ attach.hdr.id = MBOX_MSG_ATTACH_RESOURCES;
+ attach.hdr.sig = OTX2_MBOX_REQ_SIG;
+ attach.hdr.ver = OTX2_MBOX_VERSION;
+ attach.hdr.pcifunc = 0;
+ attach.modify = 1;
+ attach.cptlfs = 1;
+ attach.cpt_blkaddr = blkaddr;
+
+ return rvu_mbox_handler_attach_resources(rvu, &attach, NULL);
+}
+
+static int rvu_rx_detach_cptlf(struct rvu *rvu)
+{
+ struct rsrc_detach detach;
+
+ memset(&detach, 0, sizeof(struct rsrc_detach));
+ detach.hdr.id = MBOX_MSG_ATTACH_RESOURCES;
+ detach.hdr.sig = OTX2_MBOX_REQ_SIG;
+ detach.hdr.ver = OTX2_MBOX_VERSION;
+ detach.hdr.pcifunc = 0;
+ detach.partial = 1;
+ detach.cptlfs = 1;
+
+ return rvu_mbox_handler_detach_resources(rvu, &detach, NULL);
+}
+
+/* Allocate memory for the CPT instruction queue.
+ * Instruction queue memory format is:
+ * -----------------------------
+ * | Instruction Group memory |
+ * | (CPT_LF_Q_SIZE[SIZE_DIV40] |
+ * | x 16 Bytes) |
+ * | |
+ * ----------------------------- <-- CPT_LF_Q_BASE[ADDR]
+ * | Flow Control (128 Bytes) |
+ * | |
+ * -----------------------------
+ * | Instruction Memory |
+ * | (CPT_LF_Q_SIZE[SIZE_DIV40] |
+ * | × 40 × 64 bytes) |
+ * | |
+ * -----------------------------
+ */
+static int rvu_rx_cpt_iq_alloc(struct rvu *rvu, struct rvu_cpt_inst_queue *iq)
+{
+ iq->size = RVU_CPT_INST_QLEN_BYTES + RVU_CPT_Q_FC_LEN +
+ RVU_CPT_INST_GRP_QLEN_BYTES + OTX2_ALIGN;
+
+ iq->real_vaddr = dma_alloc_coherent(rvu->dev, iq->size,
+ &iq->real_dma_addr, GFP_KERNEL);
+ if (!iq->real_vaddr)
+ return -ENOMEM;
+
+ /* iq->vaddr/dma_addr points to Flow Control location */
+ iq->vaddr = iq->real_vaddr + RVU_CPT_INST_GRP_QLEN_BYTES;
+ iq->dma_addr = iq->real_dma_addr + RVU_CPT_INST_GRP_QLEN_BYTES;
+
+ /* Align pointers */
+ iq->vaddr = PTR_ALIGN(iq->vaddr, OTX2_ALIGN);
+ iq->dma_addr = PTR_ALIGN(iq->dma_addr, OTX2_ALIGN);
+ return 0;
+}
+
+static void rvu_rx_cpt_iq_free(struct rvu *rvu, int blkaddr)
+{
+ struct rvu_cpt_inst_queue *iq;
+
+ if (blkaddr == BLKADDR_CPT0)
+ iq = &rvu->rvu_cpt.cpt0_iq;
+ else
+ iq = &rvu->rvu_cpt.cpt1_iq;
+
+	if (iq->real_vaddr)
+		dma_free_coherent(rvu->dev, iq->size, iq->real_vaddr,
+				  iq->real_dma_addr);
+
+ iq->real_vaddr = NULL;
+ iq->vaddr = NULL;
+}
+
+static int rvu_rx_cpt_set_grp_pri_ilen(struct rvu *rvu, int blkaddr, int cptlf)
+{
+ u64 reg_val;
+
+ reg_val = rvu_read64(rvu, blkaddr, CPT_AF_LFX_CTL(cptlf));
+ /* Set High priority */
+ reg_val |= 1;
+ /* Set engine group */
+ reg_val |= ((1ULL << rvu->rvu_cpt.inline_ipsec_egrp) << 48);
+ /* Set ilen if valid */
+ if (rvu->rvu_cpt.rx_cfg.ctx_ilen_valid)
+ reg_val |= rvu->rvu_cpt.rx_cfg.ctx_ilen << 17;
+
+ rvu_write64(rvu, blkaddr, CPT_AF_LFX_CTL(cptlf), reg_val);
+ return 0;
+}
+
+static int rvu_cpt_rx_inline_cptlf_init(struct rvu *rvu, int blkaddr, int slot)
+{
+ struct rvu_cpt_inst_queue *iq;
+ struct rvu_block *block;
+ int pcifunc = 0;
+ int cptlf;
+ int err;
+ u64 val;
+
+ /* Attach cptlf with AF for inline inbound ipsec */
+ err = rvu_rx_attach_cptlf(rvu, blkaddr);
+ if (err)
+ return err;
+
+ block = &rvu->hw->block[blkaddr];
+ cptlf = rvu_get_lf(rvu, block, pcifunc, slot);
+ if (cptlf < 0) {
+ err = CPT_AF_ERR_LF_INVALID;
+ goto detach_cptlf;
+ }
+
+ if (blkaddr == BLKADDR_CPT0)
+ iq = &rvu->rvu_cpt.cpt0_iq;
+ else
+ iq = &rvu->rvu_cpt.cpt1_iq;
+
+ /* Allocate CPT instruction queue */
+ err = rvu_rx_cpt_iq_alloc(rvu, iq);
+ if (err)
+ goto detach_cptlf;
+
+ /* reset CPT LF */
+ cpt_rx_ipsec_lf_reset(rvu, blkaddr, slot);
+
+ /* Disable IQ */
+ cpt_rx_ipsec_lf_disable_iqueue(rvu, blkaddr, slot);
+
+ /* Set IQ base address */
+ otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot, CPT_LF_Q_BASE,
+ iq->dma_addr);
+ /* Set IQ size */
+ val = FIELD_PREP(CPT_LF_Q_SIZE_DIV40, RVU_CPT_SIZE_DIV40 +
+ RVU_CPT_EXTRA_SIZE_DIV40);
+ otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot, CPT_LF_Q_SIZE, val);
+
+ /* Enable IQ */
+ cpt_rx_ipsec_lf_enable_iqueue(rvu, blkaddr, slot);
+
+ /* Set High priority */
+ rvu_rx_cpt_set_grp_pri_ilen(rvu, blkaddr, cptlf);
+
+ return 0;
+detach_cptlf:
+ rvu_rx_detach_cptlf(rvu);
+ return err;
+}
+
+static void rvu_cpt_rx_inline_cptlf_clean(struct rvu *rvu, int blkaddr,
+ int slot)
+{
+ /* Disable IQ */
+ cpt_rx_ipsec_lf_disable_iqueue(rvu, blkaddr, slot);
+
+ /* Free Instruction Queue */
+ rvu_rx_cpt_iq_free(rvu, blkaddr);
+
+ /* Detach CPTLF */
+ rvu_rx_detach_cptlf(rvu);
+}
+
+static void rvu_cpt_save_rx_inline_lf_cfg(struct rvu *rvu,
+ struct cpt_rx_inline_lf_cfg_msg *req)
+{
+ rvu->rvu_cpt.rx_cfg.sso_pf_func = req->sso_pf_func;
+ rvu->rvu_cpt.rx_cfg.param1 = req->param1;
+ rvu->rvu_cpt.rx_cfg.param2 = req->param2;
+ rvu->rvu_cpt.rx_cfg.opcode = req->opcode;
+ rvu->rvu_cpt.rx_cfg.credit = req->credit;
+ rvu->rvu_cpt.rx_cfg.credit_th = req->credit_th;
+ rvu->rvu_cpt.rx_cfg.bpid = req->bpid;
+ rvu->rvu_cpt.rx_cfg.ctx_ilen_valid = req->ctx_ilen_valid;
+ rvu->rvu_cpt.rx_cfg.ctx_ilen = req->ctx_ilen;
+}
+
+static void
+rvu_show_diff_cpt_rx_inline_lf_cfg(struct rvu *rvu,
+ struct cpt_rx_inline_lf_cfg_msg *req)
+{
+ struct device *dev = rvu->dev;
+
+ if (rvu->rvu_cpt.rx_cfg.sso_pf_func != req->sso_pf_func)
+ dev_info(dev, "Mismatch RX inline config sso_pf_func Req %x Prog %x\n",
+ req->sso_pf_func, rvu->rvu_cpt.rx_cfg.sso_pf_func);
+ if (rvu->rvu_cpt.rx_cfg.param1 != req->param1)
+ dev_info(dev, "Mismatch RX inline config param1 Req %x Prog %x\n",
+ req->param1, rvu->rvu_cpt.rx_cfg.param1);
+ if (rvu->rvu_cpt.rx_cfg.param2 != req->param2)
+ dev_info(dev, "Mismatch RX inline config param2 Req %x Prog %x\n",
+ req->param2, rvu->rvu_cpt.rx_cfg.param2);
+ if (rvu->rvu_cpt.rx_cfg.opcode != req->opcode)
+ dev_info(dev, "Mismatch RX inline config opcode Req %x Prog %x\n",
+ req->opcode, rvu->rvu_cpt.rx_cfg.opcode);
+ if (rvu->rvu_cpt.rx_cfg.credit != req->credit)
+ dev_info(dev, "Mismatch RX inline config credit Req %x Prog %x\n",
+ req->credit, rvu->rvu_cpt.rx_cfg.credit);
+ if (rvu->rvu_cpt.rx_cfg.credit_th != req->credit_th)
+ dev_info(dev, "Mismatch RX inline config credit_th Req %x Prog %x\n",
+ req->credit_th, rvu->rvu_cpt.rx_cfg.credit_th);
+ if (rvu->rvu_cpt.rx_cfg.bpid != req->bpid)
+ dev_info(dev, "Mismatch RX inline config bpid Req %x Prog %x\n",
+ req->bpid, rvu->rvu_cpt.rx_cfg.bpid);
+ if (rvu->rvu_cpt.rx_cfg.ctx_ilen != req->ctx_ilen)
+ dev_info(dev, "Mismatch RX inline config ctx_ilen Req %x Prog %x\n",
+ req->ctx_ilen, rvu->rvu_cpt.rx_cfg.ctx_ilen);
+ if (rvu->rvu_cpt.rx_cfg.ctx_ilen_valid != req->ctx_ilen_valid)
+ dev_info(dev, "Mismatch RX inline config ctx_ilen_valid Req %x Prog %x\n",
+ req->ctx_ilen_valid,
+ rvu->rvu_cpt.rx_cfg.ctx_ilen_valid);
+}
+
+static void rvu_cpt_rx_inline_nix_cfg(struct rvu *rvu)
+{
+ struct nix_inline_ipsec_cfg nix_cfg;
+
+ nix_cfg.enable = 1;
+ nix_cfg.credit_th = rvu->rvu_cpt.rx_cfg.credit_th;
+ nix_cfg.bpid = rvu->rvu_cpt.rx_cfg.bpid;
+ if (!rvu->rvu_cpt.rx_cfg.credit || rvu->rvu_cpt.rx_cfg.credit >
+ RVU_CPT_INST_QLEN_MSGS)
+ nix_cfg.cpt_credit = RVU_CPT_INST_QLEN_MSGS - 1;
+ else
+ nix_cfg.cpt_credit = rvu->rvu_cpt.rx_cfg.credit - 1;
+
+ nix_cfg.gen_cfg.egrp = rvu->rvu_cpt.inline_ipsec_egrp;
+ if (rvu->rvu_cpt.rx_cfg.opcode) {
+ nix_cfg.gen_cfg.opcode = rvu->rvu_cpt.rx_cfg.opcode;
+ } else {
+ if (is_rvu_otx2(rvu))
+ nix_cfg.gen_cfg.opcode = OTX2_CPT_INLINE_RX_OPCODE;
+ else
+ nix_cfg.gen_cfg.opcode = CN10K_CPT_INLINE_RX_OPCODE;
+ }
+
+ nix_cfg.gen_cfg.param1 = rvu->rvu_cpt.rx_cfg.param1;
+ nix_cfg.gen_cfg.param2 = rvu->rvu_cpt.rx_cfg.param2;
+ nix_cfg.inst_qsel.cpt_pf_func = rvu_get_pf(0);
+ nix_cfg.inst_qsel.cpt_slot = 0;
+
+ nix_inline_ipsec_cfg(rvu, &nix_cfg, BLKADDR_NIX0);
+
+ if (is_block_implemented(rvu->hw, BLKADDR_CPT1))
+ nix_inline_ipsec_cfg(rvu, &nix_cfg, BLKADDR_NIX1);
+}
+
+static int rvu_cpt_rx_inline_ipsec_cfg(struct rvu *rvu)
+{
+ struct rvu_block *block;
+ struct cpt_inline_ipsec_cfg_msg req;
+ u16 pcifunc = 0;
+ int cptlf;
+ int err;
+
+ memset(&req, 0, sizeof(struct cpt_inline_ipsec_cfg_msg));
+	req.sso_pf_func_ovrd = 0; /* Add sysfs interface to set this */
+ req.sso_pf_func = rvu->rvu_cpt.rx_cfg.sso_pf_func;
+ req.enable = 1;
+
+ block = &rvu->hw->block[BLKADDR_CPT0];
+ cptlf = rvu_get_lf(rvu, block, pcifunc, 0);
+ if (cptlf < 0)
+ return CPT_AF_ERR_LF_INVALID;
+
+ err = cpt_inline_ipsec_cfg_inbound(rvu, BLKADDR_CPT0, cptlf, &req);
+ if (err)
+ return err;
+
+ if (!is_block_implemented(rvu->hw, BLKADDR_CPT1))
+ return 0;
+
+ block = &rvu->hw->block[BLKADDR_CPT1];
+ cptlf = rvu_get_lf(rvu, block, pcifunc, 0);
+ if (cptlf < 0)
+ return CPT_AF_ERR_LF_INVALID;
+
+ return cpt_inline_ipsec_cfg_inbound(rvu, BLKADDR_CPT1, cptlf, &req);
+}
+
+static int rvu_cpt_rx_inline_cptlf_setup(struct rvu *rvu, int blkaddr, int slot)
+{
+ int err;
+
+ err = rvu_cpt_rx_inline_cptlf_init(rvu, blkaddr, slot);
+ if (err) {
+ dev_err(rvu->dev,
+ "CPTLF configuration failed for RX inline ipsec\n");
+ return err;
+ }
+
+ err = rvu_cpt_rx_inline_setup_irq(rvu, blkaddr, slot);
+ if (err) {
+ dev_err(rvu->dev,
+ "CPTLF Interrupt setup failed for RX inline ipsec\n");
+ rvu_cpt_rx_inline_cptlf_clean(rvu, blkaddr, slot);
+ return err;
+ }
+ return 0;
+}
+
+static void rvu_rx_cptlf_cleanup(struct rvu *rvu, int blkaddr, int slot)
+{
+ /* IRQ cleanup */
+ rvu_cpt_rx_inline_cleanup_irq(rvu, blkaddr, slot);
+
+ /* CPTLF cleanup */
+ rvu_cpt_rx_inline_cptlf_clean(rvu, blkaddr, slot);
+}
+
+int rvu_mbox_handler_cpt_rx_inline_lf_cfg(struct rvu *rvu,
+ struct cpt_rx_inline_lf_cfg_msg *req,
+ struct msg_rsp *rsp)
+{
+ u8 egrp = OTX2_CPT_INVALID_CRYPTO_ENG_GRP;
+ int err;
+ int i;
+
+ mutex_lock(&rvu->rvu_cpt.lock);
+ if (rvu->rvu_cpt.rx_initialized) {
+ dev_info(rvu->dev, "Inline RX CPT already initialized\n");
+ rvu_show_diff_cpt_rx_inline_lf_cfg(rvu, req);
+ err = 0;
+ goto unlock;
+ }
+
+ /* Get Inline Ipsec Engine Group */
+ for (i = 0; i < OTX2_CPT_MAX_ENG_TYPES; i++) {
+ if (rvu->rvu_cpt.eng_grp[i].eng_type == OTX2_CPT_IE_TYPES) {
+ egrp = rvu->rvu_cpt.eng_grp[i].grp_num;
+ break;
+ }
+ }
+
+ if (egrp == OTX2_CPT_INVALID_CRYPTO_ENG_GRP) {
+ dev_err(rvu->dev,
+ "Engine group for inline ipsec not available\n");
+ err = -ENODEV;
+ goto unlock;
+ }
+ rvu->rvu_cpt.inline_ipsec_egrp = egrp;
+
+ rvu_cpt_save_rx_inline_lf_cfg(rvu, req);
+
+ err = rvu_cpt_rx_inline_cptlf_setup(rvu, BLKADDR_CPT0, 0);
+ if (err)
+ goto unlock;
+
+ if (is_block_implemented(rvu->hw, BLKADDR_CPT1)) {
+ err = rvu_cpt_rx_inline_cptlf_setup(rvu, BLKADDR_CPT1, 0);
+ if (err)
+ goto cptlf_cleanup;
+ }
+
+ rvu_cpt_rx_inline_nix_cfg(rvu);
+
+ err = rvu_cpt_rx_inline_ipsec_cfg(rvu);
+ if (err)
+ goto cptlf1_cleanup;
+
+ rvu->rvu_cpt.rx_initialized = true;
+ mutex_unlock(&rvu->rvu_cpt.lock);
+ return 0;
+
+cptlf1_cleanup:
+ rvu_rx_cptlf_cleanup(rvu, BLKADDR_CPT1, 0);
+cptlf_cleanup:
+ rvu_rx_cptlf_cleanup(rvu, BLKADDR_CPT0, 0);
+unlock:
+ mutex_unlock(&rvu->rvu_cpt.lock);
+ return err;
+}
+
#define MAX_RXC_ICB_CNT GENMASK_ULL(40, 32)
int rvu_cpt_init(struct rvu *rvu)
@@ -1336,5 +1898,6 @@ int rvu_cpt_init(struct rvu *rvu)
spin_lock_init(&rvu->cpt_intr_lock);
+ mutex_init(&rvu->rvu_cpt.lock);
return 0;
}
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.h
new file mode 100644
index 000000000000..4b57c7038d6c
--- /dev/null
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.h
@@ -0,0 +1,67 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Marvell AF CPT driver
+ *
+ * Copyright (C) 2022 Marvell.
+ */
+
+#ifndef RVU_CPT_H
+#define RVU_CPT_H
+
+#include <linux/types.h>
+
+/* CPT instruction size in bytes */
+#define RVU_CPT_INST_SIZE 64
+
+/* CPT instruction (CPT_INST_S) queue length */
+#define RVU_CPT_INST_QLEN 8200
+
+/* CPT instruction queue size passed to HW is in units of
+ * 40*CPT_INST_S messages.
+ */
+#define RVU_CPT_SIZE_DIV40 (RVU_CPT_INST_QLEN / 40)
+
+/* CPT instruction and pending queues length in CPT_INST_S messages */
+#define RVU_CPT_INST_QLEN_MSGS ((RVU_CPT_SIZE_DIV40 - 1) * 40)
+
+/* CPT needs 320 free entries */
+#define RVU_CPT_INST_QLEN_EXTRA_BYTES (320 * RVU_CPT_INST_SIZE)
+#define RVU_CPT_EXTRA_SIZE_DIV40 (320 / 40)
+
+/* CPT instruction queue length in bytes */
+#define RVU_CPT_INST_QLEN_BYTES \
+ ((RVU_CPT_SIZE_DIV40 * 40 * RVU_CPT_INST_SIZE) + \
+ RVU_CPT_INST_QLEN_EXTRA_BYTES)
+
+/* CPT instruction group queue length in bytes */
+#define RVU_CPT_INST_GRP_QLEN_BYTES \
+ ((RVU_CPT_SIZE_DIV40 + RVU_CPT_EXTRA_SIZE_DIV40) * 16)
+
+/* CPT FC length in bytes */
+#define RVU_CPT_Q_FC_LEN 128
+
+/* CPT LF_Q_SIZE Register */
+#define CPT_LF_Q_SIZE_DIV40 GENMASK_ULL(14, 0)
+
+/* CPT invalid engine group num */
+#define OTX2_CPT_INVALID_CRYPTO_ENG_GRP 0xFF
+
+/* Fastpath ipsec opcode with inplace processing */
+#define OTX2_CPT_INLINE_RX_OPCODE (0x26 | (1 << 6))
+#define CN10K_CPT_INLINE_RX_OPCODE (0x29 | (1 << 6))
+
+/* Calculate CPT register offset */
+#define CPT_RVU_FUNC_ADDR_S(blk, slot, offs) \
+ (((blk) << 20) | ((slot) << 12) | (offs))
+
+static inline void otx2_cpt_write64(void __iomem *reg_base, u64 blk, u64 slot,
+ u64 offs, u64 val)
+{
+ writeq_relaxed(val, reg_base + CPT_RVU_FUNC_ADDR_S(blk, slot, offs));
+}
+
+static inline u64 otx2_cpt_read64(void __iomem *reg_base, u64 blk, u64 slot,
+ u64 offs)
+{
+ return readq_relaxed(reg_base + CPT_RVU_FUNC_ADDR_S(blk, slot, offs));
+}
+#endif // RVU_CPT_H
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
index 613655fcd34f..6bd995c45dad 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
@@ -5486,8 +5486,8 @@ int rvu_mbox_handler_nix_lso_format_cfg(struct rvu *rvu,
#define CPT_INST_CREDIT_BPID GENMASK_ULL(30, 22)
#define CPT_INST_CREDIT_CNT GENMASK_ULL(21, 0)
-static void nix_inline_ipsec_cfg(struct rvu *rvu, struct nix_inline_ipsec_cfg *req,
- int blkaddr)
+void nix_inline_ipsec_cfg(struct rvu *rvu, struct nix_inline_ipsec_cfg *req,
+ int blkaddr)
{
u8 cpt_idx, cpt_blkaddr;
u64 val;
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
index 62cdc714ba57..a982cffdc5f5 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
@@ -563,6 +563,11 @@
#define CPT_LF_CTL 0x10
#define CPT_LF_INPROG 0x40
+#define CPT_LF_MISC_INT 0xb0
+#define CPT_LF_MISC_INT_ENA_W1S 0xd0
+#define CPT_LF_MISC_INT_ENA_W1C 0xe0
+#define CPT_LF_MISC_INT_MASK 0x6e
+#define CPT_LF_Q_BASE 0xf0
#define CPT_LF_Q_SIZE 0x100
#define CPT_LF_Q_INST_PTR 0x110
#define CPT_LF_Q_GRP_PTR 0x120
--
2.43.0
* [net-next PATCH v1 03/15] octeontx2-af: Setup Large Memory Transaction for crypto
2025-05-02 13:19 [net-next PATCH v1 00/15] Enable Inbound IPsec offload on Marvell CN10K SoC Tanmay Jagdale
2025-05-02 13:19 ` [net-next PATCH v1 01/15] crypto: octeontx2: Share engine group info with AF driver Tanmay Jagdale
2025-05-02 13:19 ` [net-next PATCH v1 02/15] octeontx2-af: Configure crypto hardware for inline ipsec Tanmay Jagdale
@ 2025-05-02 13:19 ` Tanmay Jagdale
2025-05-02 13:19 ` [net-next PATCH v1 04/15] octeontx2-af: Handle inbound inline ipsec config in AF Tanmay Jagdale
` (12 subsequent siblings)
15 siblings, 0 replies; 43+ messages in thread
From: Tanmay Jagdale @ 2025-05-02 13:19 UTC (permalink / raw)
To: bbrezillon, arno, schalla, herbert, davem, sgoutham, lcherian,
gakula, jerinj, hkelam, sbhatta, andrew+netdev, edumazet, kuba,
pabeni, bbhushan2, bhelgaas, pstanner, gregkh, peterz, linux,
krzysztof.kozlowski, giovanni.cabiddu
Cc: linux-crypto, linux-kernel, netdev, rkannoth, sumang, gcherian,
Tanmay Jagdale
From: Bharat Bhushan <bbhushan2@marvell.com>
A Large Memory Transaction store (LMTST) operation is required for
enqueuing work to the CPT hardware. An LMTST operation makes one or
more 128-byte write operations to a normal, cacheable memory region.
This patch sets up an LMTST memory region for enqueuing work to the
CPT hardware.
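For context, once this region exists, submitting a 64-byte CPT_INST_S
reduces to copying it into the LMT line and issuing a single LMTST
flush, as the next patch does in cn10k_cpt_inst_flush():

	void __iomem *io_addr;	/* BAR2 address of the LF's CPT_LF_NQX */
	u64 tar_addr, val = 0;

	/* tar_addr[6:4] = (number of 128-bit words - 1) in this LMTST */
	tar_addr = (__force u64)io_addr | ((((size / 16) - 1) & 0x7) << 4);
	dma_wmb();
	memcpy((void *)rvu->rvu_cpt.lmt_addr, inst, size); /* fill LMT line */
	cn10k_lmt_flush(val, tar_addr);                    /* one LMTST store */
	dma_wmb();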
Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
---
.../net/ethernet/marvell/octeontx2/af/rvu.c | 1 +
.../net/ethernet/marvell/octeontx2/af/rvu.h | 7 +++
.../ethernet/marvell/octeontx2/af/rvu_cpt.c | 51 +++++++++++++++++++
.../ethernet/marvell/octeontx2/af/rvu_cpt.h | 4 ++
4 files changed, 63 insertions(+)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
index d9f000cda5e5..ea346e59835b 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
@@ -731,6 +731,7 @@ static void rvu_free_hw_resources(struct rvu *rvu)
rvu_npa_freemem(rvu);
rvu_npc_freemem(rvu);
rvu_nix_freemem(rvu);
+ rvu_cpt_freemem(rvu);
/* Free block LF bitmaps */
for (id = 0; id < BLK_COUNT; id++) {
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index 6923fd756b19..6551fdb612dc 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -557,6 +557,12 @@ struct rvu_cpt {
struct rvu_cpt_inst_queue cpt0_iq;
struct rvu_cpt_inst_queue cpt1_iq;
struct rvu_cpt_rx_inline_lf_cfg rx_cfg;
+
+ /* CPT LMTST */
+ void *lmt_base;
+ u64 lmt_addr;
+ size_t lmt_size;
+ dma_addr_t lmt_iova;
};
struct rvu {
@@ -1086,6 +1092,7 @@ int rvu_cpt_lf_teardown(struct rvu *rvu, u16 pcifunc, int blkaddr, int lf,
int slot);
int rvu_cpt_ctx_flush(struct rvu *rvu, u16 pcifunc);
int rvu_cpt_init(struct rvu *rvu);
+void rvu_cpt_freemem(struct rvu *rvu);
#define NDC_AF_BANK_MASK GENMASK_ULL(7, 0)
#define NDC_AF_BANK_LINE_MASK GENMASK_ULL(31, 16)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
index 89e0739ba414..8ed56ac512ef 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
@@ -1874,10 +1874,46 @@ int rvu_mbox_handler_cpt_rx_inline_lf_cfg(struct rvu *rvu,
#define MAX_RXC_ICB_CNT GENMASK_ULL(40, 32)
+static int rvu_cpt_lmt_init(struct rvu *rvu)
+{
+ struct lmtst_tbl_setup_req req;
+ dma_addr_t iova;
+ void *base;
+ int size;
+ int err;
+
+ if (is_rvu_otx2(rvu))
+ return 0;
+
+ memset(&req, 0, sizeof(struct lmtst_tbl_setup_req));
+
+ size = LMT_LINE_SIZE * LMT_BURST_SIZE + OTX2_ALIGN;
+ base = dma_alloc_attrs(rvu->dev, size, &iova, GFP_ATOMIC,
+ DMA_ATTR_FORCE_CONTIGUOUS);
+ if (!base)
+ return -ENOMEM;
+
+ req.lmt_iova = ALIGN(iova, OTX2_ALIGN);
+ req.use_local_lmt_region = true;
+ err = rvu_mbox_handler_lmtst_tbl_setup(rvu, &req, NULL);
+ if (err) {
+ dma_free_attrs(rvu->dev, size, base, iova,
+ DMA_ATTR_FORCE_CONTIGUOUS);
+ return err;
+ }
+
+ rvu->rvu_cpt.lmt_addr = (__force u64)PTR_ALIGN(base, OTX2_ALIGN);
+ rvu->rvu_cpt.lmt_base = base;
+ rvu->rvu_cpt.lmt_size = size;
+ rvu->rvu_cpt.lmt_iova = iova;
+ return 0;
+}
+
int rvu_cpt_init(struct rvu *rvu)
{
struct rvu_hwinfo *hw = rvu->hw;
u64 reg_val;
+ int ret;
/* Retrieve CPT PF number */
rvu->cpt_pf_num = get_cpt_pf_num(rvu);
@@ -1898,6 +1934,21 @@ int rvu_cpt_init(struct rvu *rvu)
spin_lock_init(&rvu->cpt_intr_lock);
+ ret = rvu_cpt_lmt_init(rvu);
+ if (ret)
+ return ret;
+
mutex_init(&rvu->rvu_cpt.lock);
return 0;
}
+
+void rvu_cpt_freemem(struct rvu *rvu)
+{
+ if (is_rvu_otx2(rvu))
+ return;
+
+ if (rvu->rvu_cpt.lmt_base)
+ dma_free_attrs(rvu->dev, rvu->rvu_cpt.lmt_size,
+ rvu->rvu_cpt.lmt_base, rvu->rvu_cpt.lmt_iova,
+ DMA_ATTR_FORCE_CONTIGUOUS);
+}
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.h
index 4b57c7038d6c..e6fa247a03ba 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.h
@@ -49,6 +49,10 @@
#define OTX2_CPT_INLINE_RX_OPCODE (0x26 | (1 << 6))
#define CN10K_CPT_INLINE_RX_OPCODE (0x29 | (1 << 6))
+/* CPT LMTST */
+#define LMT_LINE_SIZE 128 /* LMT line size in bytes */
+#define LMT_BURST_SIZE 32 /* 32 LMTST lines for burst */
+
/* Calculate CPT register offset */
#define CPT_RVU_FUNC_ADDR_S(blk, slot, offs) \
(((blk) << 20) | ((slot) << 12) | (offs))
--
2.43.0
* [net-next PATCH v1 04/15] octeontx2-af: Handle inbound inline ipsec config in AF
2025-05-02 13:19 [net-next PATCH v1 00/15] Enable Inbound IPsec offload on Marvell CN10K SoC Tanmay Jagdale
` (2 preceding siblings ...)
2025-05-02 13:19 ` [net-next PATCH v1 03/15] octeontx2-af: Setup Large Memory Transaction for crypto Tanmay Jagdale
@ 2025-05-02 13:19 ` Tanmay Jagdale
2025-05-07 9:19 ` Simon Horman
2025-05-02 13:19 ` [net-next PATCH v1 05/15] crypto: octeontx2: Remove inbound inline ipsec config Tanmay Jagdale
` (11 subsequent siblings)
15 siblings, 1 reply; 43+ messages in thread
From: Tanmay Jagdale @ 2025-05-02 13:19 UTC (permalink / raw)
To: bbrezillon, arno, schalla, herbert, davem, sgoutham, lcherian,
gakula, jerinj, hkelam, sbhatta, andrew+netdev, edumazet, kuba,
pabeni, bbhushan2, bhelgaas, pstanner, gregkh, peterz, linux,
krzysztof.kozlowski, giovanni.cabiddu
Cc: linux-crypto, linux-kernel, netdev, rkannoth, sumang, gcherian,
Tanmay Jagdale
From: Bharat Bhushan <bbhushan2@marvell.com>
Now the CPT context flush can be handled in the AF driver, since a CPT
LF can be attached to it. With that, the AF driver can completely handle
the inbound inline IPsec configuration mailbox, so forward this mailbox
to the AF driver.
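With the LF owned by the AF, the context flush no longer needs BAR2
aliasing; the AF now writes CPT_LF_CTX_FLUSH directly through its own
BAR2 slot mapping (condensed from the diff below):

	for (i = 0; i < max_ctx_entries; i++) {
		cam_data = rvu_read64(rvu, blkaddr, CPT_AF_CTX_CAM_DATA(i));
		if ((FIELD_GET(CTX_CAM_PF_FUNC, cam_data) == pcifunc) &&
		    FIELD_GET(CTX_CAM_CPTR, cam_data)) {
			reg = BIT_ULL(46) | FIELD_GET(CTX_CAM_CPTR, cam_data);
			otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot,
					 CPT_LF_CTX_FLUSH, reg);
		}
	}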
Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
---
.../marvell/octeontx2/otx2_cpt_common.h | 1 -
.../marvell/octeontx2/otx2_cptpf_mbox.c | 3 -
.../net/ethernet/marvell/octeontx2/af/mbox.h | 2 +
.../ethernet/marvell/octeontx2/af/rvu_cpt.c | 67 +++++++++----------
.../ethernet/marvell/octeontx2/af/rvu_reg.h | 1 +
5 files changed, 34 insertions(+), 40 deletions(-)
diff --git a/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h b/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h
index df735eab8f08..27a2dd997f73 100644
--- a/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h
+++ b/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h
@@ -33,7 +33,6 @@
#define BAD_OTX2_CPT_ENG_TYPE OTX2_CPT_MAX_ENG_TYPES
/* Take mbox id from end of CPT mbox range in AF (range 0xA00 - 0xBFF) */
-#define MBOX_MSG_RX_INLINE_IPSEC_LF_CFG 0xBFE
#define MBOX_MSG_GET_ENG_GRP_NUM 0xBFF
#define MBOX_MSG_GET_CAPS 0xBFD
#define MBOX_MSG_GET_KVF_LIMITS 0xBFC
diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c
index 5e6f70ac35a7..222419bd5ac9 100644
--- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c
+++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c
@@ -326,9 +326,6 @@ static int cptpf_handle_vf_req(struct otx2_cptpf_dev *cptpf,
case MBOX_MSG_GET_KVF_LIMITS:
err = handle_msg_kvf_limits(cptpf, vf, req);
break;
- case MBOX_MSG_RX_INLINE_IPSEC_LF_CFG:
- err = handle_msg_rx_inline_ipsec_lf_cfg(cptpf, req);
- break;
default:
err = forward_to_af(cptpf, vf, req, size);
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index 8540a04a92f9..ad74a27888da 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -213,6 +213,8 @@ M(CPT_FLT_ENG_INFO, 0xA09, cpt_flt_eng_info, cpt_flt_eng_info_req, \
cpt_flt_eng_info_rsp) \
M(CPT_SET_ENG_GRP_NUM, 0xA0A, cpt_set_eng_grp_num, cpt_set_egrp_num, \
msg_rsp) \
+M(CPT_RX_INLINE_LF_CFG, 0xBFE, cpt_rx_inline_lf_cfg, cpt_rx_inline_lf_cfg_msg, \
+ msg_rsp) \
/* SDP mbox IDs (range 0x1000 - 0x11FF) */ \
M(SET_SDP_CHAN_INFO, 0x1000, set_sdp_chan_info, sdp_chan_info_msg, msg_rsp) \
M(GET_SDP_CHAN_INFO, 0x1001, get_sdp_chan_info, msg_req, sdp_get_chan_info_msg) \
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
index 8ed56ac512ef..2e8ac71979ae 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
@@ -12,6 +12,7 @@
#include "mbox.h"
#include "rvu.h"
#include "rvu_cpt.h"
+#include <linux/soc/marvell/octeontx2/asm.h>
/* CPT PF device id */
#define PCI_DEVID_OTX2_CPT_PF 0xA0FD
@@ -26,6 +27,10 @@
/* Default CPT_AF_RXC_CFG1:max_rxc_icb_cnt */
#define CPT_DFLT_MAX_RXC_ICB_CNT 0xC0ULL
+/* CPT LMTST */
+#define LMT_LINE_SIZE 128 /* LMT line size in bytes */
+#define LMT_BURST_SIZE 32 /* 32 LMTST lines for burst */
+
#define cpt_get_eng_sts(e_min, e_max, rsp, etype) \
({ \
u64 free_sts = 0, busy_sts = 0; \
@@ -1253,20 +1258,36 @@ int rvu_cpt_lf_teardown(struct rvu *rvu, u16 pcifunc, int blkaddr, int lf, int s
return 0;
}
+static void cn10k_cpt_inst_flush(struct rvu *rvu, u64 *inst, u64 size)
+{
+ u64 val = 0, tar_addr = 0;
+ void __iomem *io_addr;
+ u64 blkaddr = BLKADDR_CPT0;
+
+ io_addr = rvu->pfreg_base + CPT_RVU_FUNC_ADDR_S(blkaddr, 0, CPT_LF_NQX);
+
+ /* Target address for LMTST flush tells HW how many 128bit
+ * words are present.
+ * tar_addr[6:4] size of first LMTST - 1 in units of 128b.
+ */
+ tar_addr |= (__force u64)io_addr | (((size / 16) - 1) & 0x7) << 4;
+ dma_wmb();
+ memcpy((u64 *)rvu->rvu_cpt.lmt_addr, inst, size);
+ cn10k_lmt_flush(val, tar_addr);
+ dma_wmb();
+}
+
#define CPT_RES_LEN 16
#define CPT_SE_IE_EGRP 1ULL
static int cpt_inline_inb_lf_cmd_send(struct rvu *rvu, int blkaddr,
int nix_blkaddr)
{
- int cpt_pf_num = rvu->cpt_pf_num;
- struct cpt_inst_lmtst_req *req;
dma_addr_t res_daddr;
int timeout = 3000;
u8 cpt_idx;
- u64 *inst;
+ u64 inst[8];
u16 *res;
- int rc;
res = kzalloc(CPT_RES_LEN, GFP_KERNEL);
if (!res)
@@ -1276,24 +1297,11 @@ static int cpt_inline_inb_lf_cmd_send(struct rvu *rvu, int blkaddr,
DMA_BIDIRECTIONAL);
if (dma_mapping_error(rvu->dev, res_daddr)) {
dev_err(rvu->dev, "DMA mapping failed for CPT result\n");
- rc = -EFAULT;
- goto res_free;
+ kfree(res);
+ return -EFAULT;
}
*res = 0xFFFF;
- /* Send mbox message to CPT PF */
- req = (struct cpt_inst_lmtst_req *)
- otx2_mbox_alloc_msg_rsp(&rvu->afpf_wq_info.mbox_up,
- cpt_pf_num, sizeof(*req),
- sizeof(struct msg_rsp));
- if (!req) {
- rc = -ENOMEM;
- goto res_daddr_unmap;
- }
- req->hdr.sig = OTX2_MBOX_REQ_SIG;
- req->hdr.id = MBOX_MSG_CPT_INST_LMTST;
-
- inst = req->inst;
/* Prepare CPT_INST_S */
inst[0] = 0;
inst[1] = res_daddr;
@@ -1314,11 +1322,8 @@ static int cpt_inline_inb_lf_cmd_send(struct rvu *rvu, int blkaddr,
rvu_write64(rvu, nix_blkaddr, NIX_AF_RX_CPTX_CREDIT(cpt_idx),
BIT_ULL(22) - 1);
- otx2_mbox_msg_send(&rvu->afpf_wq_info.mbox_up, cpt_pf_num);
- rc = otx2_mbox_wait_for_rsp(&rvu->afpf_wq_info.mbox_up, cpt_pf_num);
- if (rc)
- dev_warn(rvu->dev, "notification to pf %d failed\n",
- cpt_pf_num);
+ cn10k_cpt_inst_flush(rvu, inst, 64);
+
/* Wait for CPT instruction to be completed */
do {
mdelay(1);
@@ -1331,11 +1336,8 @@ static int cpt_inline_inb_lf_cmd_send(struct rvu *rvu, int blkaddr,
if (timeout == 0)
dev_warn(rvu->dev, "Poll for result hits hard loop counter\n");
-res_daddr_unmap:
dma_unmap_single(rvu->dev, res_daddr, CPT_RES_LEN, DMA_BIDIRECTIONAL);
-res_free:
kfree(res);
-
return 0;
}
@@ -1381,23 +1383,16 @@ int rvu_cpt_ctx_flush(struct rvu *rvu, u16 pcifunc)
goto unlock;
}
- /* Enable BAR2 ALIAS for this pcifunc. */
- reg = BIT_ULL(16) | pcifunc;
- rvu_bar2_sel_write64(rvu, blkaddr, CPT_AF_BAR2_SEL, reg);
-
for (i = 0; i < max_ctx_entries; i++) {
cam_data = rvu_read64(rvu, blkaddr, CPT_AF_CTX_CAM_DATA(i));
if ((FIELD_GET(CTX_CAM_PF_FUNC, cam_data) == pcifunc) &&
FIELD_GET(CTX_CAM_CPTR, cam_data)) {
reg = BIT_ULL(46) | FIELD_GET(CTX_CAM_CPTR, cam_data);
- rvu_write64(rvu, blkaddr,
- CPT_AF_BAR2_ALIASX(slot, CPT_LF_CTX_FLUSH),
- reg);
+ otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot,
+ CPT_LF_CTX_FLUSH, reg);
}
}
- rvu_bar2_sel_write64(rvu, blkaddr, CPT_AF_BAR2_SEL, 0);
-
unlock:
mutex_unlock(&rvu->rsrc_lock);
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
index a982cffdc5f5..245e69fcbff9 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
@@ -571,6 +571,7 @@
#define CPT_LF_Q_SIZE 0x100
#define CPT_LF_Q_INST_PTR 0x110
#define CPT_LF_Q_GRP_PTR 0x120
+#define CPT_LF_NQX 0x400
#define CPT_LF_CTX_FLUSH 0x510
#define NPC_AF_BLK_RST (0x00040)
--
2.43.0
* [net-next PATCH v1 05/15] crypto: octeontx2: Remove inbound inline ipsec config
From: Tanmay Jagdale @ 2025-05-02 13:19 UTC
From: Bharat Bhushan <bbhushan2@marvell.com>
The AF driver can now handle all inbound inline IPsec configuration,
so remove this code from the CPT driver.
Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
---
drivers/crypto/marvell/octeontx2/otx2_cptpf.h | 10 -
.../marvell/octeontx2/otx2_cptpf_main.c | 46 ---
.../marvell/octeontx2/otx2_cptpf_mbox.c | 282 +-----------------
.../net/ethernet/marvell/octeontx2/af/mbox.h | 11 -
.../ethernet/marvell/octeontx2/af/rvu_cpt.c | 4 -
5 files changed, 2 insertions(+), 351 deletions(-)
diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf.h b/drivers/crypto/marvell/octeontx2/otx2_cptpf.h
index e5859a1e1c60..b7d1298e2b85 100644
--- a/drivers/crypto/marvell/octeontx2/otx2_cptpf.h
+++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf.h
@@ -41,9 +41,6 @@ struct otx2_cptpf_dev {
struct work_struct afpf_mbox_work;
struct workqueue_struct *afpf_mbox_wq;
- struct otx2_mbox afpf_mbox_up;
- struct work_struct afpf_mbox_up_work;
-
/* VF <=> PF mbox */
struct otx2_mbox vfpf_mbox;
struct workqueue_struct *vfpf_mbox_wq;
@@ -56,10 +53,8 @@ struct otx2_cptpf_dev {
u8 pf_id; /* RVU PF number */
u8 max_vfs; /* Maximum number of VFs supported by CPT */
u8 enabled_vfs; /* Number of enabled VFs */
- u8 sso_pf_func_ovrd; /* SSO PF_FUNC override bit */
u8 kvf_limits; /* Kernel crypto limits */
bool has_cpt1;
- u8 rsrc_req_blkaddr;
/* Devlink */
struct devlink *dl;
@@ -67,12 +62,7 @@ struct otx2_cptpf_dev {
irqreturn_t otx2_cptpf_afpf_mbox_intr(int irq, void *arg);
void otx2_cptpf_afpf_mbox_handler(struct work_struct *work);
-void otx2_cptpf_afpf_mbox_up_handler(struct work_struct *work);
irqreturn_t otx2_cptpf_vfpf_mbox_intr(int irq, void *arg);
void otx2_cptpf_vfpf_mbox_handler(struct work_struct *work);
-int otx2_inline_cptlf_setup(struct otx2_cptpf_dev *cptpf,
- struct otx2_cptlfs_info *lfs, u8 egrp, int num_lfs);
-void otx2_inline_cptlf_cleanup(struct otx2_cptlfs_info *lfs);
-
#endif /* __OTX2_CPTPF_H */
diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c
index 8a7ed0152371..34dbfea7f974 100644
--- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c
+++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c
@@ -13,7 +13,6 @@
#define OTX2_CPT_DRV_NAME "rvu_cptpf"
#define OTX2_CPT_DRV_STRING "Marvell RVU CPT Physical Function Driver"
-#define CPT_UC_RID_CN9K_B0 1
#define CPT_UC_RID_CN10K_A 4
#define CPT_UC_RID_CN10K_B 5
@@ -477,19 +476,10 @@ static int cptpf_afpf_mbox_init(struct otx2_cptpf_dev *cptpf)
if (err)
goto error;
- err = otx2_mbox_init(&cptpf->afpf_mbox_up, cptpf->afpf_mbox_base,
- pdev, cptpf->reg_base, MBOX_DIR_PFAF_UP, 1);
- if (err)
- goto mbox_cleanup;
-
INIT_WORK(&cptpf->afpf_mbox_work, otx2_cptpf_afpf_mbox_handler);
- INIT_WORK(&cptpf->afpf_mbox_up_work, otx2_cptpf_afpf_mbox_up_handler);
mutex_init(&cptpf->lock);
-
return 0;
-mbox_cleanup:
- otx2_mbox_destroy(&cptpf->afpf_mbox);
error:
destroy_workqueue(cptpf->afpf_mbox_wq);
return err;
@@ -499,33 +489,6 @@ static void cptpf_afpf_mbox_destroy(struct otx2_cptpf_dev *cptpf)
{
destroy_workqueue(cptpf->afpf_mbox_wq);
otx2_mbox_destroy(&cptpf->afpf_mbox);
- otx2_mbox_destroy(&cptpf->afpf_mbox_up);
-}
-
-static ssize_t sso_pf_func_ovrd_show(struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- struct otx2_cptpf_dev *cptpf = dev_get_drvdata(dev);
-
- return sprintf(buf, "%d\n", cptpf->sso_pf_func_ovrd);
-}
-
-static ssize_t sso_pf_func_ovrd_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t count)
-{
- struct otx2_cptpf_dev *cptpf = dev_get_drvdata(dev);
- u8 sso_pf_func_ovrd;
-
- if (!(cptpf->pdev->revision == CPT_UC_RID_CN9K_B0))
- return count;
-
- if (kstrtou8(buf, 0, &sso_pf_func_ovrd))
- return -EINVAL;
-
- cptpf->sso_pf_func_ovrd = sso_pf_func_ovrd;
-
- return count;
}
static ssize_t kvf_limits_show(struct device *dev,
@@ -558,11 +521,9 @@ static ssize_t kvf_limits_store(struct device *dev,
}
static DEVICE_ATTR_RW(kvf_limits);
-static DEVICE_ATTR_RW(sso_pf_func_ovrd);
static struct attribute *cptpf_attrs[] = {
&dev_attr_kvf_limits.attr,
- &dev_attr_sso_pf_func_ovrd.attr,
NULL
};
@@ -833,13 +794,6 @@ static void otx2_cptpf_remove(struct pci_dev *pdev)
cptpf_sriov_disable(pdev);
otx2_cpt_unregister_dl(cptpf);
- /* Cleanup Inline CPT LF's if attached */
- if (cptpf->lfs.lfs_num)
- otx2_inline_cptlf_cleanup(&cptpf->lfs);
-
- if (cptpf->cpt1_lfs.lfs_num)
- otx2_inline_cptlf_cleanup(&cptpf->cpt1_lfs);
-
/* Delete sysfs entry created for kernel VF limits */
sysfs_remove_group(&pdev->dev.kobj, &cptpf_sysfs_group);
/* Cleanup engine groups */
diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c
index 222419bd5ac9..6b2881b534f5 100644
--- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c
+++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c
@@ -5,20 +5,6 @@
#include "otx2_cptpf.h"
#include "rvu_reg.h"
-/* Fastpath ipsec opcode with inplace processing */
-#define CPT_INLINE_RX_OPCODE (0x26 | (1 << 6))
-#define CN10K_CPT_INLINE_RX_OPCODE (0x29 | (1 << 6))
-
-#define cpt_inline_rx_opcode(pdev) \
-({ \
- u8 opcode; \
- if (is_dev_otx2(pdev)) \
- opcode = CPT_INLINE_RX_OPCODE; \
- else \
- opcode = CN10K_CPT_INLINE_RX_OPCODE; \
- (opcode); \
-})
-
/*
* CPT PF driver version, It will be incremented by 1 for every feature
* addition in CPT mailbox messages.
@@ -126,186 +112,6 @@ static int handle_msg_kvf_limits(struct otx2_cptpf_dev *cptpf,
return 0;
}
-static int send_inline_ipsec_inbound_msg(struct otx2_cptpf_dev *cptpf,
- int sso_pf_func, u8 slot)
-{
- struct cpt_inline_ipsec_cfg_msg *req;
- struct pci_dev *pdev = cptpf->pdev;
-
- req = (struct cpt_inline_ipsec_cfg_msg *)
- otx2_mbox_alloc_msg_rsp(&cptpf->afpf_mbox, 0,
- sizeof(*req), sizeof(struct msg_rsp));
- if (req == NULL) {
- dev_err(&pdev->dev, "RVU MBOX failed to get message.\n");
- return -EFAULT;
- }
- memset(req, 0, sizeof(*req));
- req->hdr.id = MBOX_MSG_CPT_INLINE_IPSEC_CFG;
- req->hdr.sig = OTX2_MBOX_REQ_SIG;
- req->hdr.pcifunc = OTX2_CPT_RVU_PFFUNC(cptpf->pf_id, 0);
- req->dir = CPT_INLINE_INBOUND;
- req->slot = slot;
- req->sso_pf_func_ovrd = cptpf->sso_pf_func_ovrd;
- req->sso_pf_func = sso_pf_func;
- req->enable = 1;
-
- return otx2_cpt_send_mbox_msg(&cptpf->afpf_mbox, pdev);
-}
-
-static int rx_inline_ipsec_lf_cfg(struct otx2_cptpf_dev *cptpf, u8 egrp,
- struct otx2_cpt_rx_inline_lf_cfg *req)
-{
- struct nix_inline_ipsec_cfg *nix_req;
- struct pci_dev *pdev = cptpf->pdev;
- int ret;
-
- nix_req = (struct nix_inline_ipsec_cfg *)
- otx2_mbox_alloc_msg_rsp(&cptpf->afpf_mbox, 0,
- sizeof(*nix_req),
- sizeof(struct msg_rsp));
- if (nix_req == NULL) {
- dev_err(&pdev->dev, "RVU MBOX failed to get message.\n");
- return -EFAULT;
- }
- memset(nix_req, 0, sizeof(*nix_req));
- nix_req->hdr.id = MBOX_MSG_NIX_INLINE_IPSEC_CFG;
- nix_req->hdr.sig = OTX2_MBOX_REQ_SIG;
- nix_req->enable = 1;
- nix_req->credit_th = req->credit_th;
- nix_req->bpid = req->bpid;
- if (!req->credit || req->credit > OTX2_CPT_INST_QLEN_MSGS)
- nix_req->cpt_credit = OTX2_CPT_INST_QLEN_MSGS - 1;
- else
- nix_req->cpt_credit = req->credit - 1;
- nix_req->gen_cfg.egrp = egrp;
- if (req->opcode)
- nix_req->gen_cfg.opcode = req->opcode;
- else
- nix_req->gen_cfg.opcode = cpt_inline_rx_opcode(pdev);
- nix_req->gen_cfg.param1 = req->param1;
- nix_req->gen_cfg.param2 = req->param2;
- nix_req->inst_qsel.cpt_pf_func = OTX2_CPT_RVU_PFFUNC(cptpf->pf_id, 0);
- nix_req->inst_qsel.cpt_slot = 0;
- ret = otx2_cpt_send_mbox_msg(&cptpf->afpf_mbox, pdev);
- if (ret)
- return ret;
-
- if (cptpf->has_cpt1) {
- ret = send_inline_ipsec_inbound_msg(cptpf, req->sso_pf_func, 1);
- if (ret)
- return ret;
- }
-
- return send_inline_ipsec_inbound_msg(cptpf, req->sso_pf_func, 0);
-}
-
-int
-otx2_inline_cptlf_setup(struct otx2_cptpf_dev *cptpf,
- struct otx2_cptlfs_info *lfs, u8 egrp, int num_lfs)
-{
- int ret;
-
- ret = otx2_cptlf_init(lfs, 1 << egrp, OTX2_CPT_QUEUE_HI_PRIO, 1);
- if (ret) {
- dev_err(&cptpf->pdev->dev,
- "LF configuration failed for RX inline ipsec.\n");
- return ret;
- }
-
- /* Get msix offsets for attached LFs */
- ret = otx2_cpt_msix_offset_msg(lfs);
- if (ret)
- goto cleanup_lf;
-
- /* Register for CPT LF Misc interrupts */
- ret = otx2_cptlf_register_misc_interrupts(lfs);
- if (ret)
- goto free_irq;
-
- return 0;
-free_irq:
- otx2_cptlf_unregister_misc_interrupts(lfs);
-cleanup_lf:
- otx2_cptlf_shutdown(lfs);
- return ret;
-}
-
-void
-otx2_inline_cptlf_cleanup(struct otx2_cptlfs_info *lfs)
-{
- /* Unregister misc interrupt */
- otx2_cptlf_unregister_misc_interrupts(lfs);
-
- /* Cleanup LFs */
- otx2_cptlf_shutdown(lfs);
-}
-
-static int handle_msg_rx_inline_ipsec_lf_cfg(struct otx2_cptpf_dev *cptpf,
- struct mbox_msghdr *req)
-{
- struct otx2_cpt_rx_inline_lf_cfg *cfg_req;
- int num_lfs = 1, ret;
- u8 egrp;
-
- cfg_req = (struct otx2_cpt_rx_inline_lf_cfg *)req;
- if (cptpf->lfs.lfs_num) {
- dev_err(&cptpf->pdev->dev,
- "LF is already configured for RX inline ipsec.\n");
- return -EEXIST;
- }
- /*
- * Allow LFs to execute requests destined to only grp IE_TYPES and
- * set queue priority of each LF to high
- */
- egrp = otx2_cpt_get_eng_grp(&cptpf->eng_grps, OTX2_CPT_IE_TYPES);
- if (egrp == OTX2_CPT_INVALID_CRYPTO_ENG_GRP) {
- dev_err(&cptpf->pdev->dev,
- "Engine group for inline ipsec is not available\n");
- return -ENOENT;
- }
-
- otx2_cptlf_set_dev_info(&cptpf->lfs, cptpf->pdev, cptpf->reg_base,
- &cptpf->afpf_mbox, BLKADDR_CPT0);
- cptpf->lfs.global_slot = 0;
- cptpf->lfs.ctx_ilen_ovrd = cfg_req->ctx_ilen_valid;
- cptpf->lfs.ctx_ilen = cfg_req->ctx_ilen;
-
- ret = otx2_inline_cptlf_setup(cptpf, &cptpf->lfs, egrp, num_lfs);
- if (ret) {
- dev_err(&cptpf->pdev->dev, "Inline-Ipsec CPT0 LF setup failed.\n");
- return ret;
- }
-
- if (cptpf->has_cpt1) {
- cptpf->rsrc_req_blkaddr = BLKADDR_CPT1;
- otx2_cptlf_set_dev_info(&cptpf->cpt1_lfs, cptpf->pdev,
- cptpf->reg_base, &cptpf->afpf_mbox,
- BLKADDR_CPT1);
- cptpf->cpt1_lfs.global_slot = num_lfs;
- cptpf->cpt1_lfs.ctx_ilen_ovrd = cfg_req->ctx_ilen_valid;
- cptpf->cpt1_lfs.ctx_ilen = cfg_req->ctx_ilen;
- ret = otx2_inline_cptlf_setup(cptpf, &cptpf->cpt1_lfs, egrp,
- num_lfs);
- if (ret) {
- dev_err(&cptpf->pdev->dev, "Inline CPT1 LF setup failed.\n");
- goto lf_cleanup;
- }
- cptpf->rsrc_req_blkaddr = 0;
- }
-
- ret = rx_inline_ipsec_lf_cfg(cptpf, egrp, cfg_req);
- if (ret)
- goto lf1_cleanup;
-
- return 0;
-
-lf1_cleanup:
- otx2_inline_cptlf_cleanup(&cptpf->cpt1_lfs);
-lf_cleanup:
- otx2_inline_cptlf_cleanup(&cptpf->lfs);
- return ret;
-}
-
static int cptpf_handle_vf_req(struct otx2_cptpf_dev *cptpf,
struct otx2_cptvf_info *vf,
struct mbox_msghdr *req, int size)
@@ -419,28 +225,14 @@ void otx2_cptpf_vfpf_mbox_handler(struct work_struct *work)
irqreturn_t otx2_cptpf_afpf_mbox_intr(int __always_unused irq, void *arg)
{
struct otx2_cptpf_dev *cptpf = arg;
- struct otx2_mbox_dev *mdev;
- struct otx2_mbox *mbox;
- struct mbox_hdr *hdr;
u64 intr;
/* Read the interrupt bits */
intr = otx2_cpt_read64(cptpf->reg_base, BLKADDR_RVUM, 0, RVU_PF_INT);
if (intr & 0x1ULL) {
- mbox = &cptpf->afpf_mbox;
- mdev = &mbox->dev[0];
- hdr = mdev->mbase + mbox->rx_start;
- if (hdr->num_msgs)
- /* Schedule work queue function to process the MBOX request */
- queue_work(cptpf->afpf_mbox_wq, &cptpf->afpf_mbox_work);
-
- mbox = &cptpf->afpf_mbox_up;
- mdev = &mbox->dev[0];
- hdr = mdev->mbase + mbox->rx_start;
- if (hdr->num_msgs)
- /* Schedule work queue function to process the MBOX request */
- queue_work(cptpf->afpf_mbox_wq, &cptpf->afpf_mbox_up_work);
+ /* Schedule work queue function to process the MBOX request */
+ queue_work(cptpf->afpf_mbox_wq, &cptpf->afpf_mbox_work);
/* Clear and ack the interrupt */
otx2_cpt_write64(cptpf->reg_base, BLKADDR_RVUM, 0, RVU_PF_INT,
0x1ULL);
@@ -466,8 +258,6 @@ static void process_afpf_mbox_msg(struct otx2_cptpf_dev *cptpf,
msg->sig, msg->id);
return;
}
- if (cptpf->rsrc_req_blkaddr == BLKADDR_CPT1)
- lfs = &cptpf->cpt1_lfs;
switch (msg->id) {
case MBOX_MSG_READY:
@@ -594,71 +384,3 @@ void otx2_cptpf_afpf_mbox_handler(struct work_struct *work)
}
otx2_mbox_reset(afpf_mbox, 0);
}
-
-static void handle_msg_cpt_inst_lmtst(struct otx2_cptpf_dev *cptpf,
- struct mbox_msghdr *msg)
-{
- struct cpt_inst_lmtst_req *req = (struct cpt_inst_lmtst_req *)msg;
- struct otx2_cptlfs_info *lfs = &cptpf->lfs;
- struct msg_rsp *rsp;
-
- if (cptpf->lfs.lfs_num)
- lfs->ops->send_cmd((union otx2_cpt_inst_s *)req->inst, 1,
- &lfs->lf[0]);
-
- rsp = (struct msg_rsp *)otx2_mbox_alloc_msg(&cptpf->afpf_mbox_up, 0,
- sizeof(*rsp));
- if (!rsp)
- return;
-
- rsp->hdr.id = msg->id;
- rsp->hdr.sig = OTX2_MBOX_RSP_SIG;
- rsp->hdr.pcifunc = 0;
- rsp->hdr.rc = 0;
-}
-
-static void process_afpf_mbox_up_msg(struct otx2_cptpf_dev *cptpf,
- struct mbox_msghdr *msg)
-{
- if (msg->id >= MBOX_MSG_MAX) {
- dev_err(&cptpf->pdev->dev,
- "MBOX msg with unknown ID %d\n", msg->id);
- return;
- }
-
- switch (msg->id) {
- case MBOX_MSG_CPT_INST_LMTST:
- handle_msg_cpt_inst_lmtst(cptpf, msg);
- break;
- default:
- otx2_reply_invalid_msg(&cptpf->afpf_mbox_up, 0, 0, msg->id);
- }
-}
-
-void otx2_cptpf_afpf_mbox_up_handler(struct work_struct *work)
-{
- struct otx2_cptpf_dev *cptpf;
- struct otx2_mbox_dev *mdev;
- struct mbox_hdr *rsp_hdr;
- struct mbox_msghdr *msg;
- struct otx2_mbox *mbox;
- int offset, i;
-
- cptpf = container_of(work, struct otx2_cptpf_dev, afpf_mbox_up_work);
- mbox = &cptpf->afpf_mbox_up;
- mdev = &mbox->dev[0];
- /* Sync mbox data into memory */
- smp_wmb();
-
- rsp_hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
- offset = mbox->rx_start + ALIGN(sizeof(*rsp_hdr), MBOX_MSG_ALIGN);
-
- for (i = 0; i < rsp_hdr->num_msgs; i++) {
- msg = (struct mbox_msghdr *)(mdev->mbase + offset);
-
- process_afpf_mbox_up_msg(cptpf, msg);
-
- offset = mbox->rx_start + msg->next_msgoff;
- }
- otx2_mbox_msg_send(mbox, 0);
-}
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index ad74a27888da..f9321084abb6 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -386,9 +386,6 @@ M(MCS_CUSTOM_TAG_CFG_GET, 0xa021, mcs_custom_tag_cfg_get, \
#define MBOX_UP_CGX_MESSAGES \
M(CGX_LINK_EVENT, 0xC00, cgx_link_event, cgx_link_info_msg, msg_rsp)
-#define MBOX_UP_CPT_MESSAGES \
-M(CPT_INST_LMTST, 0xD00, cpt_inst_lmtst, cpt_inst_lmtst_req, msg_rsp)
-
#define MBOX_UP_MCS_MESSAGES \
M(MCS_INTR_NOTIFY, 0xE00, mcs_intr_notify, mcs_intr_info, msg_rsp)
@@ -399,7 +396,6 @@ enum {
#define M(_name, _id, _1, _2, _3) MBOX_MSG_ ## _name = _id,
MBOX_MESSAGES
MBOX_UP_CGX_MESSAGES
-MBOX_UP_CPT_MESSAGES
MBOX_UP_MCS_MESSAGES
MBOX_UP_REP_MESSAGES
#undef M
@@ -1915,13 +1911,6 @@ struct cpt_rxc_time_cfg_req {
u16 active_limit;
};
-/* Mailbox message request format to request for CPT_INST_S lmtst. */
-struct cpt_inst_lmtst_req {
- struct mbox_msghdr hdr;
- u64 inst[8];
- u64 rsvd;
-};
-
/* Mailbox message format to request for CPT LF reset */
struct cpt_lf_rst_req {
struct mbox_msghdr hdr;
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
index 2e8ac71979ae..c7e46e77eab0 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
@@ -704,10 +704,6 @@ int rvu_mbox_handler_cpt_inline_ipsec_cfg(struct rvu *rvu,
return CPT_AF_ERR_LF_INVALID;
switch (req->dir) {
- case CPT_INLINE_INBOUND:
- ret = cpt_inline_ipsec_cfg_inbound(rvu, blkaddr, cptlf, req);
- break;
-
case CPT_INLINE_OUTBOUND:
ret = cpt_inline_ipsec_cfg_outbound(rvu, blkaddr, cptlf, req);
break;
--
2.43.0
* [net-next PATCH v1 06/15] octeontx2-af: Add support for CPT second pass
From: Tanmay Jagdale @ 2025-05-02 13:19 UTC
From: Rakesh Kudurumalla <rkudurumalla@marvell.com>
Implement a mailbox to allocate an RQ mask profile and apply it to
a NIX LF so that selected RQ context fields can be toggled for CPT
second-pass packets.
Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>
Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
---
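A minimal sketch of the intended PF-side usage, assuming the
otx2_mbox_alloc_msg_nix_lf_inline_rq_cfg() helper that the M() macro
above generates and the PF driver's usual otx2_sync_mbox_msg() flow;
the checksum field choices are purely illustrative:

/* Sketch only: request an RQ-mask profile so that selected RQ
 * context fields are rewritten for CPT second-pass packets.
 * rq_set/rq_mask overlay the first six context words, so a field
 * changes only where the corresponding mask bits are set.
 */
static int otx2_inline_rq_mask_cfg(struct otx2_nic *pfvf)
{
        struct nix_rq_cpt_field_mask_cfg_req *req;
        int err;

        mutex_lock(&pfvf->mbox.lock);
        req = otx2_mbox_alloc_msg_nix_lf_inline_rq_cfg(&pfvf->mbox);
        if (!req) {
                mutex_unlock(&pfvf->mbox.lock);
                return -ENOMEM;
        }

        /* Illustrative: skip L4 checksum checks on the second pass */
        req->rq_set.csum_ol4_dis = 1;
        req->rq_set.csum_il4_dis = 1;
        req->rq_mask.csum_ol4_dis = 1;
        req->rq_mask.csum_il4_dis = 1;
        req->ipsec_cfg1.rq_mask_enable = 1;

        err = otx2_sync_mbox_msg(&pfvf->mbox);
        mutex_unlock(&pfvf->mbox.lock);
        return err;
}

The set/mask split mirrors the NIX_AF_RX_RQX_SETX()/MASKX() register
pairs programmed by nix_inline_rq_mask_alloc() below.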
.../net/ethernet/marvell/octeontx2/af/mbox.h | 23 ++++
.../net/ethernet/marvell/octeontx2/af/rvu.h | 7 +
.../ethernet/marvell/octeontx2/af/rvu_cn10k.c | 11 ++
.../ethernet/marvell/octeontx2/af/rvu_nix.c | 120 ++++++++++++++++++
.../ethernet/marvell/octeontx2/af/rvu_reg.h | 6 +
.../marvell/octeontx2/af/rvu_struct.h | 4 +-
6 files changed, 170 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index f9321084abb6..715efcc04c9e 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -323,6 +323,9 @@ M(NIX_CPT_BP_DISABLE, 0x8021, nix_cpt_bp_disable, nix_bp_cfg_req, \
msg_rsp) \
M(NIX_READ_INLINE_IPSEC_CFG, 0x8023, nix_read_inline_ipsec_cfg, \
msg_req, nix_inline_ipsec_cfg) \
+M(NIX_LF_INLINE_RQ_CFG, 0x8024, nix_lf_inline_rq_cfg, \
+ nix_rq_cpt_field_mask_cfg_req, \
+ msg_rsp) \
M(NIX_MCAST_GRP_CREATE, 0x802b, nix_mcast_grp_create, nix_mcast_grp_create_req, \
nix_mcast_grp_create_rsp) \
M(NIX_MCAST_GRP_DESTROY, 0x802c, nix_mcast_grp_destroy, nix_mcast_grp_destroy_req, \
@@ -857,6 +860,7 @@ enum nix_af_status {
NIX_AF_ERR_CQ_CTX_WRITE_ERR = -429,
NIX_AF_ERR_AQ_CTX_RETRY_WRITE = -430,
NIX_AF_ERR_LINK_CREDITS = -431,
+ NIX_AF_ERR_RQ_CPT_MASK = -432,
NIX_AF_ERR_INVALID_BPID = -434,
NIX_AF_ERR_INVALID_BPID_REQ = -435,
NIX_AF_ERR_INVALID_MCAST_GRP = -436,
@@ -1178,6 +1182,25 @@ struct nix_mark_format_cfg_rsp {
u8 mark_format_idx;
};
+struct nix_rq_cpt_field_mask_cfg_req {
+ struct mbox_msghdr hdr;
+#define RQ_CTX_MASK_MAX 6
+ union {
+ u64 rq_ctx_word_set[RQ_CTX_MASK_MAX];
+ struct nix_cn10k_rq_ctx_s rq_set;
+ };
+ union {
+ u64 rq_ctx_word_mask[RQ_CTX_MASK_MAX];
+ struct nix_cn10k_rq_ctx_s rq_mask;
+ };
+ struct nix_lf_rx_ipec_cfg1_req {
+ u32 spb_cpt_aura;
+ u8 rq_mask_enable;
+ u8 spb_cpt_sizem1;
+ u8 spb_cpt_enable;
+ } ipsec_cfg1;
+};
+
struct nix_rx_mode {
struct mbox_msghdr hdr;
#define NIX_RX_MODE_UCAST BIT(0)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index 6551fdb612dc..71407f6318ec 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -350,6 +350,11 @@ struct nix_lso {
u8 in_use;
};
+struct nix_rq_cpt_mask {
+ u8 total;
+ u8 in_use;
+};
+
struct nix_txvlan {
#define NIX_TX_VTAG_DEF_MAX 0x400
struct rsrc_bmap rsrc;
@@ -373,6 +378,7 @@ struct nix_hw {
struct nix_flowkey flowkey;
struct nix_mark_format mark_format;
struct nix_lso lso;
+ struct nix_rq_cpt_mask rq_msk;
struct nix_txvlan txvlan;
struct nix_ipolicer *ipolicer;
struct nix_bp bp;
@@ -398,6 +404,7 @@ struct hw_cap {
bool per_pf_mbox_regs; /* PF mbox specified in per PF registers ? */
bool programmable_chans; /* Channels programmable ? */
bool ipolicer;
+ bool second_cpt_pass;
bool nix_multiple_dwrr_mtu; /* Multiple DWRR_MTU to choose from */
bool npc_hash_extract; /* Hash extract enabled ? */
bool npc_exact_match_enabled; /* Exact match supported ? */
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c
index 7fa98aeb3663..18e2a48e2de1 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c
@@ -544,6 +544,7 @@ void rvu_program_channels(struct rvu *rvu)
void rvu_nix_block_cn10k_init(struct rvu *rvu, struct nix_hw *nix_hw)
{
+ struct rvu_hwinfo *hw = rvu->hw;
int blkaddr = nix_hw->blkaddr;
u64 cfg;
@@ -558,6 +559,16 @@ void rvu_nix_block_cn10k_init(struct rvu *rvu, struct nix_hw *nix_hw)
cfg = rvu_read64(rvu, blkaddr, NIX_AF_CFG);
cfg |= BIT_ULL(1) | BIT_ULL(2);
rvu_write64(rvu, blkaddr, NIX_AF_CFG, cfg);
+
+ cfg = rvu_read64(rvu, blkaddr, NIX_AF_CONST);
+
+ if (!(cfg & BIT_ULL(62))) {
+ hw->cap.second_cpt_pass = false;
+ return;
+ }
+
+ hw->cap.second_cpt_pass = true;
+ nix_hw->rq_msk.total = NIX_RQ_MSK_PROFILES;
}
void rvu_apr_block_cn10k_init(struct rvu *rvu)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
index 6bd995c45dad..b15fd331facf 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
@@ -6612,3 +6612,123 @@ int rvu_mbox_handler_nix_mcast_grp_update(struct rvu *rvu,
return ret;
}
+
+static inline void
+configure_rq_mask(struct rvu *rvu, int blkaddr, int nixlf,
+ u8 rq_mask, bool enable)
+{
+ u64 cfg, reg;
+
+ cfg = rvu_read64(rvu, blkaddr, NIX_AF_LFX_RX_IPSEC_CFG1(nixlf));
+ reg = rvu_read64(rvu, blkaddr, NIX_AF_LFX_CFG(nixlf));
+ if (enable) {
+ cfg |= BIT_ULL(43);
+ reg = (reg & ~GENMASK_ULL(36, 35)) | ((u64)rq_mask << 35);
+ } else {
+ cfg &= ~BIT_ULL(43);
+ reg = (reg & ~GENMASK_ULL(36, 35));
+ }
+ rvu_write64(rvu, blkaddr, NIX_AF_LFX_RX_IPSEC_CFG1(nixlf), cfg);
+ rvu_write64(rvu, blkaddr, NIX_AF_LFX_CFG(nixlf), reg);
+}
+
+static inline void
+configure_spb_cpt(struct rvu *rvu, int blkaddr, int nixlf,
+ struct nix_rq_cpt_field_mask_cfg_req *req, bool enable)
+{
+ u64 cfg;
+
+ cfg = rvu_read64(rvu, blkaddr, NIX_AF_LFX_RX_IPSEC_CFG1(nixlf));
+ if (enable) {
+ cfg |= BIT_ULL(37);
+ cfg &= ~GENMASK_ULL(42, 38);
+ cfg |= ((u64)req->ipsec_cfg1.spb_cpt_sizem1 << 38);
+ cfg &= ~GENMASK_ULL(63, 44);
+ cfg |= ((u64)req->ipsec_cfg1.spb_cpt_aura << 44);
+ } else {
+ cfg &= ~BIT_ULL(37);
+ cfg &= ~GENMASK_ULL(42, 38);
+ cfg &= ~GENMASK_ULL(63, 44);
+ }
+ rvu_write64(rvu, blkaddr, NIX_AF_LFX_RX_IPSEC_CFG1(nixlf), cfg);
+}
+
+static
+int nix_inline_rq_mask_alloc(struct rvu *rvu,
+ struct nix_rq_cpt_field_mask_cfg_req *req,
+ struct nix_hw *nix_hw, int blkaddr)
+{
+ u8 rq_cpt_mask_select;
+ int idx, rq_idx;
+ u64 reg_mask;
+ u64 reg_set;
+
+ for (idx = 0; idx < nix_hw->rq_msk.in_use; idx++) {
+ for (rq_idx = 0; rq_idx < RQ_CTX_MASK_MAX; rq_idx++) {
+ reg_mask = rvu_read64(rvu, blkaddr,
+ NIX_AF_RX_RQX_MASKX(idx, rq_idx));
+ reg_set = rvu_read64(rvu, blkaddr,
+ NIX_AF_RX_RQX_SETX(idx, rq_idx));
+ if (reg_mask != req->rq_ctx_word_mask[rq_idx] ||
+ reg_set != req->rq_ctx_word_set[rq_idx])
+ break;
+ }
+ if (rq_idx == RQ_CTX_MASK_MAX)
+ break;
+ }
+
+ if (idx < nix_hw->rq_msk.in_use) {
+ /* Match found */
+ rq_cpt_mask_select = idx;
+ return idx;
+ }
+
+ if (nix_hw->rq_msk.in_use == nix_hw->rq_msk.total)
+ return NIX_AF_ERR_RQ_CPT_MASK;
+
+ rq_cpt_mask_select = nix_hw->rq_msk.in_use++;
+
+ for (rq_idx = 0; rq_idx < RQ_CTX_MASK_MAX; rq_idx++) {
+ rvu_write64(rvu, blkaddr,
+ NIX_AF_RX_RQX_MASKX(rq_cpt_mask_select, rq_idx),
+ req->rq_ctx_word_mask[rq_idx]);
+ rvu_write64(rvu, blkaddr,
+ NIX_AF_RX_RQX_SETX(rq_cpt_mask_select, rq_idx),
+ req->rq_ctx_word_set[rq_idx]);
+ }
+
+ return rq_cpt_mask_select;
+}
+
+int rvu_mbox_handler_nix_lf_inline_rq_cfg(struct rvu *rvu,
+ struct nix_rq_cpt_field_mask_cfg_req *req,
+ struct msg_rsp *rsp)
+{
+ struct rvu_hwinfo *hw = rvu->hw;
+ struct nix_hw *nix_hw;
+ int blkaddr, nixlf;
+ int rq_mask = 0, err;
+
+ err = nix_get_nixlf(rvu, req->hdr.pcifunc, &nixlf, &blkaddr);
+ if (err)
+ return err;
+
+ nix_hw = get_nix_hw(rvu->hw, blkaddr);
+ if (!nix_hw)
+ return NIX_AF_ERR_INVALID_NIXBLK;
+
+ if (!hw->cap.second_cpt_pass)
+ return NIX_AF_ERR_INVALID_NIXBLK;
+
+ if (req->ipsec_cfg1.rq_mask_enable) {
+ rq_mask = nix_inline_rq_mask_alloc(rvu, req, nix_hw, blkaddr);
+ if (rq_mask < 0)
+ return NIX_AF_ERR_RQ_CPT_MASK;
+ }
+
+ configure_rq_mask(rvu, blkaddr, nixlf, rq_mask,
+ req->ipsec_cfg1.rq_mask_enable);
+ configure_spb_cpt(rvu, blkaddr, nixlf, req,
+ req->ipsec_cfg1.spb_cpt_enable);
+ return 0;
+}
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
index 245e69fcbff9..e5e005d5d71e 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
@@ -433,6 +433,8 @@
#define NIX_AF_MDQX_IN_MD_COUNT(a) (0x14e0 | (a) << 16)
#define NIX_AF_SMQX_STATUS(a) (0x730 | (a) << 16)
#define NIX_AF_MDQX_OUT_MD_COUNT(a) (0xdb0 | (a) << 16)
+#define NIX_AF_RX_RQX_MASKX(a, b) (0x4A40 | (a) << 16 | (b) << 3)
+#define NIX_AF_RX_RQX_SETX(a, b) (0x4A80 | (a) << 16 | (b) << 3)
#define NIX_PRIV_AF_INT_CFG (0x8000000)
#define NIX_PRIV_LFX_CFG (0x8000010)
@@ -452,6 +454,10 @@
#define NIX_AF_TL3_PARENT_MASK GENMASK_ULL(23, 16)
#define NIX_AF_TL2_PARENT_MASK GENMASK_ULL(20, 16)
+#define NIX_AF_LF_CFG_SHIFT 17
+#define NIX_AF_LF_SSO_PF_FUNC_SHIFT 16
+#define NIX_RQ_MSK_PROFILES 4
+
/* SSO */
#define SSO_AF_CONST (0x1000)
#define SSO_AF_CONST1 (0x1008)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h
index 77ac94cb2ec4..bd37ed3a81ad 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h
@@ -377,7 +377,9 @@ struct nix_cn10k_rq_ctx_s {
u64 ipsech_ena : 1;
u64 ena_wqwd : 1;
u64 cq : 20;
- u64 rsvd_36_24 : 13;
+ u64 rsvd_34_24 : 11;
+ u64 port_ol4_dis : 1;
+ u64 port_il4_dis : 1;
u64 lenerr_dis : 1;
u64 csum_il4_dis : 1;
u64 csum_ol4_dis : 1;
--
2.43.0
* [net-next PATCH v1 07/15] octeontx2-af: Add support for SPI to SA index translation
From: Tanmay Jagdale @ 2025-05-02 13:19 UTC
From: Kiran Kumar K <kirankumark@marvell.com>
In case of IPsec, the inbound SPI can be random. The HW supports
mapping an SPI to an arbitrary SA index. SPI-to-SA-index translation
is done via a lookup in an NPC CAM entry keyed on SPI, MATCH_ID and
LFID. Add mbox API changes to configure the match table.
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>
Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
---
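A minimal sketch of the PF-side add path, assuming the
otx2_mbox_alloc_msg_nix_spi_to_sa_add() helper generated by the M()
macro and the usual otx2_sync_mbox_msg()/otx2_mbox_get_rsp() pattern;
the match_id value is an arbitrary placeholder:

/* Sketch only: install a SPI -> SA-index mapping and record the
 * hash_index/way chosen by the AF so the entry can be removed
 * when the SA is deleted.
 */
static int otx2_spi_to_sa_add(struct otx2_nic *pfvf, u32 spi,
                              u32 sa_index, u16 *hash_index, u8 *way)
{
        struct nix_spi_to_sa_add_req *req;
        struct nix_spi_to_sa_add_rsp *rsp;
        int err;

        mutex_lock(&pfvf->mbox.lock);
        req = otx2_mbox_alloc_msg_nix_spi_to_sa_add(&pfvf->mbox);
        if (!req) {
                mutex_unlock(&pfvf->mbox.lock);
                return -ENOMEM;
        }

        req->spi_index = spi;
        req->sa_index = sa_index;
        req->match_id = 0x1;    /* placeholder: must match the NPC rule */
        req->valid = 1;

        err = otx2_sync_mbox_msg(&pfvf->mbox);
        if (!err) {
                rsp = (struct nix_spi_to_sa_add_rsp *)
                      otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr);
                if (IS_ERR(rsp)) {
                        err = PTR_ERR(rsp);
                } else {
                        *hash_index = rsp->hash_index;
                        *way = rsp->way;
                }
        }
        mutex_unlock(&pfvf->mbox.lock);
        return err;
}

The saved hash_index/way pair is exactly what
nix_spi_to_sa_delete_req expects at teardown time.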
.../ethernet/marvell/octeontx2/af/Makefile | 2 +-
.../net/ethernet/marvell/octeontx2/af/mbox.h | 27 +++
.../net/ethernet/marvell/octeontx2/af/rvu.c | 4 +
.../net/ethernet/marvell/octeontx2/af/rvu.h | 13 ++
.../ethernet/marvell/octeontx2/af/rvu_nix.c | 6 +
.../marvell/octeontx2/af/rvu_nix_spi.c | 220 ++++++++++++++++++
.../ethernet/marvell/octeontx2/af/rvu_reg.h | 4 +
7 files changed, 275 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/ethernet/marvell/octeontx2/af/rvu_nix_spi.c
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/Makefile b/drivers/net/ethernet/marvell/octeontx2/af/Makefile
index ccea37847df8..49318017f35f 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/Makefile
+++ b/drivers/net/ethernet/marvell/octeontx2/af/Makefile
@@ -8,7 +8,7 @@ obj-$(CONFIG_OCTEONTX2_MBOX) += rvu_mbox.o
obj-$(CONFIG_OCTEONTX2_AF) += rvu_af.o
rvu_mbox-y := mbox.o rvu_trace.o
-rvu_af-y := cgx.o rvu.o rvu_cgx.o rvu_npa.o rvu_nix.o \
+rvu_af-y := cgx.o rvu.o rvu_cgx.o rvu_npa.o rvu_nix.o rvu_nix_spi.o \
rvu_reg.o rvu_npc.o rvu_debugfs.o ptp.o rvu_npc_fs.o \
rvu_cpt.o rvu_devlink.o rpm.o rvu_cn10k.o rvu_switch.o \
rvu_sdp.o rvu_npc_hash.o mcs.o mcs_rvu_if.o mcs_cnf10kb.o \
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index 715efcc04c9e..5cebf10a15a7 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -326,6 +326,10 @@ M(NIX_READ_INLINE_IPSEC_CFG, 0x8023, nix_read_inline_ipsec_cfg, \
M(NIX_LF_INLINE_RQ_CFG, 0x8024, nix_lf_inline_rq_cfg, \
nix_rq_cpt_field_mask_cfg_req, \
msg_rsp) \
+M(NIX_SPI_TO_SA_ADD, 0x8026, nix_spi_to_sa_add, nix_spi_to_sa_add_req, \
+ nix_spi_to_sa_add_rsp) \
+M(NIX_SPI_TO_SA_DELETE, 0x8027, nix_spi_to_sa_delete, nix_spi_to_sa_delete_req, \
+ msg_rsp) \
M(NIX_MCAST_GRP_CREATE, 0x802b, nix_mcast_grp_create, nix_mcast_grp_create_req, \
nix_mcast_grp_create_rsp) \
M(NIX_MCAST_GRP_DESTROY, 0x802c, nix_mcast_grp_destroy, nix_mcast_grp_destroy_req, \
@@ -880,6 +884,29 @@ enum nix_rx_vtag0_type {
NIX_AF_LFX_RX_VTAG_TYPE7,
};
+/* For SPI to SA index add */
+struct nix_spi_to_sa_add_req {
+ struct mbox_msghdr hdr;
+ u32 sa_index;
+ u32 spi_index;
+ u16 match_id;
+ bool valid;
+};
+
+struct nix_spi_to_sa_add_rsp {
+ struct mbox_msghdr hdr;
+ u16 hash_index;
+ u8 way;
+ u8 is_duplicate;
+};
+
+/* To free SPI to SA index */
+struct nix_spi_to_sa_delete_req {
+ struct mbox_msghdr hdr;
+ u16 hash_index;
+ u8 way;
+};
+
/* For NIX LF context alloc and init */
struct nix_lf_alloc_req {
struct mbox_msghdr hdr;
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
index ea346e59835b..2b7c09bb24e1 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
@@ -90,6 +90,9 @@ static void rvu_setup_hw_capabilities(struct rvu *rvu)
if (is_rvu_npc_hash_extract_en(rvu))
hw->cap.npc_hash_extract = true;
+
+ if (is_rvu_nix_spi_to_sa_en(rvu))
+ hw->cap.spi_to_sas = 0x2000;
}
/* Poll a RVU block's register 'offset', for a 'zero'
@@ -2723,6 +2726,7 @@ static void __rvu_flr_handler(struct rvu *rvu, u16 pcifunc)
rvu_blklf_teardown(rvu, pcifunc, BLKADDR_NPA);
rvu_reset_lmt_map_tbl(rvu, pcifunc);
rvu_detach_rsrcs(rvu, NULL, pcifunc);
+
/* In scenarios where PF/VF drivers detach NIXLF without freeing MCAM
* entries, check and free the MCAM entries explicitly to avoid leak.
* Since LF is detached use LF number as -1.
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index 71407f6318ec..42fc3e762bc0 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -395,6 +395,7 @@ struct hw_cap {
u16 nix_txsch_per_cgx_lmac; /* Max Q's transmitting to CGX LMAC */
u16 nix_txsch_per_lbk_lmac; /* Max Q's transmitting to LBK LMAC */
u16 nix_txsch_per_sdp_lmac; /* Max Q's transmitting to SDP LMAC */
+ u16 spi_to_sas; /* Num of SPI to SA index */
bool nix_fixed_txschq_mapping; /* Schq mapping fixed or flexible */
bool nix_shaping; /* Is shaping and coloring supported */
bool nix_shaper_toggle_wait; /* Shaping toggle needs poll/wait */
@@ -800,6 +801,17 @@ static inline bool is_rvu_npc_hash_extract_en(struct rvu *rvu)
return true;
}
+static inline bool is_rvu_nix_spi_to_sa_en(struct rvu *rvu)
+{
+ u64 nix_const2;
+
+ nix_const2 = rvu_read64(rvu, BLKADDR_NIX0, NIX_AF_CONST2);
+ if ((nix_const2 >> 48) & 0xffff)
+ return true;
+
+ return false;
+}
+
static inline u16 rvu_nix_chan_cgx(struct rvu *rvu, u8 cgxid,
u8 lmacid, u8 chan)
{
@@ -992,6 +1004,7 @@ int nix_get_struct_ptrs(struct rvu *rvu, u16 pcifunc,
struct nix_hw **nix_hw, int *blkaddr);
int rvu_nix_setup_ratelimit_aggr(struct rvu *rvu, u16 pcifunc,
u16 rq_idx, u16 match_id);
+int rvu_nix_free_spi_to_sa_table(struct rvu *rvu, uint16_t pcifunc);
int nix_aq_context_read(struct rvu *rvu, struct nix_hw *nix_hw,
struct nix_cn10k_aq_enq_req *aq_req,
struct nix_cn10k_aq_enq_rsp *aq_rsp,
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
index b15fd331facf..68525bfc8e6d 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
@@ -1751,6 +1751,9 @@ int rvu_mbox_handler_nix_lf_free(struct rvu *rvu, struct nix_lf_free_req *req,
else
rvu_npc_free_mcam_entries(rvu, pcifunc, nixlf);
+ /* Reset SPI to SA index table */
+ rvu_nix_free_spi_to_sa_table(rvu, pcifunc);
+
/* Free any tx vtag def entries used by this NIX LF */
if (!(req->flags & NIX_LF_DONT_FREE_TX_VTAG))
nix_free_tx_vtag_entries(rvu, pcifunc);
@@ -5312,6 +5315,9 @@ void rvu_nix_lf_teardown(struct rvu *rvu, u16 pcifunc, int blkaddr, int nixlf)
nix_rx_sync(rvu, blkaddr);
nix_txschq_free(rvu, pcifunc);
+ /* Reset SPI to SA index table */
+ rvu_nix_free_spi_to_sa_table(rvu, pcifunc);
+
clear_bit(NIXLF_INITIALIZED, &pfvf->flags);
if (is_pf_cgxmapped(rvu, pf) && rvu->rep_mode)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix_spi.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix_spi.c
new file mode 100644
index 000000000000..b8acc23a47bc
--- /dev/null
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix_spi.c
@@ -0,0 +1,220 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Marvell RVU Admin Function driver
+ *
+ * Copyright (C) 2022 Marvell.
+ *
+ */
+
+#include "rvu.h"
+
+static bool nix_spi_to_sa_index_check_duplicate(struct rvu *rvu,
+ struct nix_spi_to_sa_add_req *req,
+ struct nix_spi_to_sa_add_rsp *rsp,
+ int blkaddr, int16_t index, u8 way,
+ bool *is_valid, int lfidx)
+{
+ u32 spi_index;
+ u16 match_id;
+ bool valid;
+ u8 lfid;
+ u64 wkey;
+
+ wkey = rvu_read64(rvu, blkaddr, NIX_AF_SPI_TO_SA_KEYX_WAYX(index, way));
+ spi_index = (wkey & 0xFFFFFFFF);
+ match_id = ((wkey >> 32) & 0xFFFF);
+ lfid = ((wkey >> 48) & 0x7f);
+ valid = ((wkey >> 55) & 0x1);
+
+ *is_valid = valid;
+ if (!valid)
+ return 0;
+
+ if (req->spi_index == spi_index && req->match_id == match_id &&
+ lfidx == lfid) {
+ rsp->hash_index = index;
+ rsp->way = way;
+ rsp->is_duplicate = true;
+ return 1;
+ }
+ return 0;
+}
+
+static void nix_spi_to_sa_index_table_update(struct rvu *rvu,
+ struct nix_spi_to_sa_add_req *req,
+ struct nix_spi_to_sa_add_rsp *rsp,
+ int blkaddr, int16_t index, u8 way,
+ int lfidx)
+{
+ u64 wvalue;
+ u64 wkey;
+
+ wkey = (req->spi_index | ((u64)req->match_id << 32) |
+ (((u64)lfidx) << 48) | ((u64)req->valid << 55));
+ rvu_write64(rvu, blkaddr, NIX_AF_SPI_TO_SA_KEYX_WAYX(index, way),
+ wkey);
+ wvalue = (req->sa_index & 0xFFFFFFFF);
+ rvu_write64(rvu, blkaddr, NIX_AF_SPI_TO_SA_VALUEX_WAYX(index, way),
+ wvalue);
+ rsp->hash_index = index;
+ rsp->way = way;
+ rsp->is_duplicate = false;
+}
+
+int rvu_mbox_handler_nix_spi_to_sa_delete(struct rvu *rvu,
+ struct nix_spi_to_sa_delete_req *req,
+ struct msg_rsp *rsp)
+{
+ struct rvu_hwinfo *hw = rvu->hw;
+ u16 pcifunc = req->hdr.pcifunc;
+ int lfidx, lfid;
+ int blkaddr;
+ u64 wvalue;
+ u64 wkey;
+ int ret = 0;
+
+ if (!hw->cap.spi_to_sas)
+ return NIX_AF_ERR_PARAM;
+
+ if (!is_nixlf_attached(rvu, pcifunc)) {
+ ret = NIX_AF_ERR_AF_LF_INVALID;
+ goto exit;
+ }
+
+ blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc);
+ lfidx = rvu_get_lf(rvu, &hw->block[blkaddr], pcifunc, 0);
+ if (lfidx < 0) {
+ ret = NIX_AF_ERR_AF_LF_INVALID;
+ goto exit;
+ }
+
+ mutex_lock(&rvu->rsrc_lock);
+
+ wkey = rvu_read64(rvu, blkaddr,
+ NIX_AF_SPI_TO_SA_KEYX_WAYX(req->hash_index, req->way));
+ lfid = ((wkey >> 48) & 0x7f);
+ if (lfid != lfidx) {
+ ret = NIX_AF_ERR_AF_LF_INVALID;
+ goto unlock;
+ }
+
+ wkey = 0;
+ rvu_write64(rvu, blkaddr,
+ NIX_AF_SPI_TO_SA_KEYX_WAYX(req->hash_index, req->way), wkey);
+ wvalue = 0;
+ rvu_write64(rvu, blkaddr,
+ NIX_AF_SPI_TO_SA_VALUEX_WAYX(req->hash_index, req->way), wvalue);
+unlock:
+ mutex_unlock(&rvu->rsrc_lock);
+exit:
+ return ret;
+}
+
+int rvu_mbox_handler_nix_spi_to_sa_add(struct rvu *rvu,
+ struct nix_spi_to_sa_add_req *req,
+ struct nix_spi_to_sa_add_rsp *rsp)
+{
+ u16 way0_index, way1_index, way2_index, way3_index;
+ struct rvu_hwinfo *hw = rvu->hw;
+ u16 pcifunc = req->hdr.pcifunc;
+ bool way0, way1, way2, way3;
+ int ret = 0;
+ int blkaddr;
+ int lfidx;
+ u64 value;
+ u64 key;
+
+ if (!hw->cap.spi_to_sas)
+ return NIX_AF_ERR_PARAM;
+
+ if (!is_nixlf_attached(rvu, pcifunc)) {
+ ret = NIX_AF_ERR_AF_LF_INVALID;
+ goto exit;
+ }
+
+ blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc);
+ lfidx = rvu_get_lf(rvu, &hw->block[blkaddr], pcifunc, 0);
+ if (lfidx < 0) {
+ ret = NIX_AF_ERR_AF_LF_INVALID;
+ goto exit;
+ }
+
+ mutex_lock(&rvu->rsrc_lock);
+
+ key = (((u64)lfidx << 48) | ((u64)req->match_id << 32) | req->spi_index);
+ rvu_write64(rvu, blkaddr, NIX_AF_SPI_TO_SA_HASH_KEY, key);
+ value = rvu_read64(rvu, blkaddr, NIX_AF_SPI_TO_SA_HASH_VALUE);
+ way0_index = (value & 0x7ff);
+ way1_index = ((value >> 16) & 0x7ff);
+ way2_index = ((value >> 32) & 0x7ff);
+ way3_index = ((value >> 48) & 0x7ff);
+
+ /* Check for duplicate entry */
+ if (nix_spi_to_sa_index_check_duplicate(rvu, req, rsp, blkaddr,
+ way0_index, 0, &way0, lfidx) ||
+ nix_spi_to_sa_index_check_duplicate(rvu, req, rsp, blkaddr,
+ way1_index, 1, &way1, lfidx) ||
+ nix_spi_to_sa_index_check_duplicate(rvu, req, rsp, blkaddr,
+ way2_index, 2, &way2, lfidx) ||
+ nix_spi_to_sa_index_check_duplicate(rvu, req, rsp, blkaddr,
+ way3_index, 3, &way3, lfidx)) {
+ ret = 0;
+ goto unlock;
+ }
+
+ /* If not present, update first available way with index */
+ if (!way0)
+ nix_spi_to_sa_index_table_update(rvu, req, rsp, blkaddr,
+ way0_index, 0, lfidx);
+ else if (!way1)
+ nix_spi_to_sa_index_table_update(rvu, req, rsp, blkaddr,
+ way1_index, 1, lfidx);
+ else if (!way2)
+ nix_spi_to_sa_index_table_update(rvu, req, rsp, blkaddr,
+ way2_index, 2, lfidx);
+ else if (!way3)
+ nix_spi_to_sa_index_table_update(rvu, req, rsp, blkaddr,
+ way3_index, 3, lfidx);
+unlock:
+ mutex_unlock(&rvu->rsrc_lock);
+exit:
+ return ret;
+}
+
+int rvu_nix_free_spi_to_sa_table(struct rvu *rvu, uint16_t pcifunc)
+{
+ struct rvu_hwinfo *hw = rvu->hw;
+ int lfidx, lfid;
+ int index, way;
+ u64 value, key;
+ int blkaddr;
+
+ if (!hw->cap.spi_to_sas)
+ return 0;
+
+ blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc);
+ lfidx = rvu_get_lf(rvu, &hw->block[blkaddr], pcifunc, 0);
+ if (lfidx < 0)
+ return NIX_AF_ERR_AF_LF_INVALID;
+
+ mutex_lock(&rvu->rsrc_lock);
+ for (index = 0; index < hw->cap.spi_to_sas / 4; index++) {
+ for (way = 0; way < 4; way++) {
+ key = rvu_read64(rvu, blkaddr,
+ NIX_AF_SPI_TO_SA_KEYX_WAYX(index, way));
+ lfid = ((key >> 48) & 0x7f);
+ if (lfid == lfidx) {
+ key = 0;
+ rvu_write64(rvu, blkaddr,
+ NIX_AF_SPI_TO_SA_KEYX_WAYX(index, way),
+ key);
+ value = 0;
+ rvu_write64(rvu, blkaddr,
+ NIX_AF_SPI_TO_SA_VALUEX_WAYX(index, way),
+ value);
+ }
+ }
+ }
+ mutex_unlock(&rvu->rsrc_lock);
+
+ return 0;
+}
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
index e5e005d5d71e..b64547fe4811 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
@@ -396,6 +396,10 @@
#define NIX_AF_RX_CHANX_CFG(a) (0x1A30 | (a) << 15)
#define NIX_AF_CINT_TIMERX(a) (0x1A40 | (a) << 18)
#define NIX_AF_LSO_FORMATX_FIELDX(a, b) (0x1B00 | (a) << 16 | (b) << 3)
+#define NIX_AF_SPI_TO_SA_KEYX_WAYX(a, b) (0x1C00 | (a) << 16 | (b) << 3)
+#define NIX_AF_SPI_TO_SA_VALUEX_WAYX(a, b) (0x1C40 | (a) << 16 | (b) << 3)
+#define NIX_AF_SPI_TO_SA_HASH_KEY (0x1C90)
+#define NIX_AF_SPI_TO_SA_HASH_VALUE (0x1CA0)
#define NIX_AF_LFX_CFG(a) (0x4000 | (a) << 17)
#define NIX_AF_LFX_SQS_CFG(a) (0x4020 | (a) << 17)
#define NIX_AF_LFX_TX_CFG2(a) (0x4028 | (a) << 17)
--
2.43.0
* [net-next PATCH v1 08/15] octeontx2-af: Add mbox to alloc/free BPIDs
From: Tanmay Jagdale @ 2025-05-02 13:19 UTC
From: Geetha sowjanya <gakula@marvell.com>
Add mbox handlers to allocate/free BPIDs from the free BPID pool.
These can be used by a PF/VF to request up to 8 BPIDs. Also add a
mbox handler to configure NIX_AF_RX_CHANX_CFG with multiple BPIDs.
Signed-off-by: Amit Singh Tomar <amitsinght@marvell.com>
Signed-off-by: Geetha sowjanya <gakula@marvell.com>
Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
---
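A minimal sketch of a PF/VF caller, assuming the
otx2_mbox_alloc_msg_nix_alloc_bpids() helper generated by the M()
macro above:

/* Sketch only: request one backpressure ID of type CPT. */
static int otx2_alloc_cpt_bpid(struct otx2_nic *pfvf, u16 *bpid)
{
        struct nix_alloc_bpid_req *req;
        struct nix_bpids *rsp;
        int err;

        mutex_lock(&pfvf->mbox.lock);
        req = otx2_mbox_alloc_msg_nix_alloc_bpids(&pfvf->mbox);
        if (!req) {
                mutex_unlock(&pfvf->mbox.lock);
                return -ENOMEM;
        }

        req->bpid_cnt = 1;
        req->type = NIX_INTF_TYPE_CPT;

        err = otx2_sync_mbox_msg(&pfvf->mbox);
        if (!err) {
                rsp = (struct nix_bpids *)
                      otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr);
                if (IS_ERR(rsp))
                        err = PTR_ERR(rsp);
                else if (!rsp->bpid_cnt)
                        err = -ENOSPC;  /* AF reports success even when empty */
                else
                        *bpid = rsp->bpids[0];
        }
        mutex_unlock(&pfvf->mbox.lock);
        return err;
}

Note that rvu_mbox_handler_nix_alloc_bpids() below returns 0 even
when the BPID pool is exhausted, so callers must check
rsp->bpid_cnt rather than the mbox return code.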
.../ethernet/marvell/octeontx2/af/common.h | 1 +
.../net/ethernet/marvell/octeontx2/af/mbox.h | 26 +++++
.../ethernet/marvell/octeontx2/af/rvu_nix.c | 100 ++++++++++++++++++
3 files changed, 127 insertions(+)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/common.h b/drivers/net/ethernet/marvell/octeontx2/af/common.h
index 406c59100a35..a7c1223dedc6 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/common.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/common.h
@@ -191,6 +191,7 @@ enum nix_scheduler {
#define NIX_INTF_TYPE_CGX 0
#define NIX_INTF_TYPE_LBK 1
#define NIX_INTF_TYPE_SDP 2
+#define NIX_INTF_TYPE_CPT 3
#define MAX_LMAC_PKIND 12
#define NIX_LINK_CGX_LMAC(a, b) (0 + 4 * (a) + (b))
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index 5cebf10a15a7..71cf507c2591 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -338,6 +338,9 @@ M(NIX_MCAST_GRP_UPDATE, 0x802d, nix_mcast_grp_update, \
nix_mcast_grp_update_req, \
nix_mcast_grp_update_rsp) \
M(NIX_LF_STATS, 0x802e, nix_lf_stats, nix_stats_req, nix_stats_rsp) \
+M(NIX_ALLOC_BPIDS, 0x8028, nix_alloc_bpids, nix_alloc_bpid_req, nix_bpids) \
+M(NIX_FREE_BPIDS, 0x8029, nix_free_bpids, nix_bpids, msg_rsp) \
+M(NIX_RX_CHAN_CFG, 0x802a, nix_rx_chan_cfg, nix_rx_chan_cfg, nix_rx_chan_cfg) \
/* MCS mbox IDs (range 0xA000 - 0xBFFF) */ \
M(MCS_ALLOC_RESOURCES, 0xa000, mcs_alloc_resources, mcs_alloc_rsrc_req, \
mcs_alloc_rsrc_rsp) \
@@ -1347,6 +1350,29 @@ struct nix_mcast_grp_update_rsp {
u32 mce_start_index;
};
+struct nix_alloc_bpid_req {
+ struct mbox_msghdr hdr;
+ u8 bpid_cnt;
+ u8 type;
+ u64 rsvd;
+};
+
+struct nix_bpids {
+ struct mbox_msghdr hdr;
+ u8 bpid_cnt;
+ u16 bpids[8];
+ u64 rsvd;
+};
+
+struct nix_rx_chan_cfg {
+ struct mbox_msghdr hdr;
+ u8 type; /* Interface type(CGX/CPT/LBK) */
+ u8 read;
+ u16 chan; /* RX channel to be configured */
+ u64 val; /* NIX_AF_RX_CHAN_CFG value */
+ u64 rsvd;
+};
+
/* Global NIX inline IPSec configuration */
struct nix_inline_ipsec_cfg {
struct mbox_msghdr hdr;
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
index 68525bfc8e6d..d5ec6ad0f30c 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
@@ -569,6 +569,106 @@ void rvu_nix_flr_free_bpids(struct rvu *rvu, u16 pcifunc)
mutex_unlock(&rvu->rsrc_lock);
}
+int rvu_mbox_handler_nix_rx_chan_cfg(struct rvu *rvu,
+ struct nix_rx_chan_cfg *req,
+ struct nix_rx_chan_cfg *rsp)
+{
+ struct rvu_pfvf *pfvf;
+ int blkaddr;
+ u16 chan;
+
+ pfvf = rvu_get_pfvf(rvu, req->hdr.pcifunc);
+ blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, req->hdr.pcifunc);
+ chan = pfvf->rx_chan_base + req->chan;
+
+ if (req->type == NIX_INTF_TYPE_CPT)
+ chan = chan | BIT(11);
+
+ if (req->read) {
+ rsp->val = rvu_read64(rvu, blkaddr,
+ NIX_AF_RX_CHANX_CFG(chan));
+ rsp->chan = req->chan;
+ } else {
+ rvu_write64(rvu, blkaddr, NIX_AF_RX_CHANX_CFG(chan), req->val);
+ }
+ return 0;
+}
+
+int rvu_mbox_handler_nix_alloc_bpids(struct rvu *rvu,
+ struct nix_alloc_bpid_req *req,
+ struct nix_bpids *rsp)
+{
+ u16 pcifunc = req->hdr.pcifunc;
+ struct nix_hw *nix_hw;
+ int blkaddr, cnt = 0;
+ struct nix_bp *bp;
+ int bpid, err;
+
+ err = nix_get_struct_ptrs(rvu, pcifunc, &nix_hw, &blkaddr);
+ if (err)
+ return err;
+
+ bp = &nix_hw->bp;
+
+ /* Interfaces like SSO use the same BPID across multiple
+ * applications. Reuse the BPID if it is already allocated,
+ * else allocate a new one.
+ */
+ if (req->type > NIX_INTF_TYPE_CPT) {
+ for (bpid = 0; bpid < bp->bpids.max; bpid++) {
+ if (bp->intf_map[bpid] == req->type) {
+ rsp->bpids[cnt] = bpid + bp->free_pool_base;
+ rsp->bpid_cnt++;
+ bp->ref_cnt[bpid]++;
+ cnt++;
+ }
+ }
+ if (rsp->bpid_cnt)
+ return 0;
+ }
+
+ for (cnt = 0; cnt < req->bpid_cnt; cnt++) {
+ bpid = rvu_alloc_rsrc(&bp->bpids);
+ if (bpid < 0)
+ return 0;
+ rsp->bpids[cnt] = bpid + bp->free_pool_base;
+ bp->intf_map[bpid] = req->type;
+ bp->fn_map[bpid] = pcifunc;
+ bp->ref_cnt[bpid]++;
+ rsp->bpid_cnt++;
+ }
+ return 0;
+}
+
+int rvu_mbox_handler_nix_free_bpids(struct rvu *rvu,
+ struct nix_bpids *req,
+ struct msg_rsp *rsp)
+{
+ u16 pcifunc = req->hdr.pcifunc;
+ int blkaddr, cnt, err, id;
+ struct nix_hw *nix_hw;
+ struct nix_bp *bp;
+ u16 bpid;
+
+ err = nix_get_struct_ptrs(rvu, pcifunc, &nix_hw, &blkaddr);
+ if (err)
+ return err;
+
+ bp = &nix_hw->bp;
+ for (cnt = 0; cnt < req->bpid_cnt; cnt++) {
+ bpid = req->bpids[cnt] - bp->free_pool_base;
+ bp->ref_cnt[bpid]--;
+ if (bp->ref_cnt[bpid])
+ continue;
+ rvu_free_rsrc(&bp->bpids, bpid);
+ for (id = 0; id < bp->bpids.max; id++) {
+ if (bp->fn_map[id] == pcifunc)
+ bp->fn_map[id] = 0;
+ }
+ }
+ return 0;
+}
+
static u16 nix_get_channel(u16 chan, bool cpt_link)
{
/* CPT channel for a given link channel is always
--
2.43.0
* [net-next PATCH v1 09/15] octeontx2-pf: ipsec: Allocate Ingress SA table
From: Tanmay Jagdale @ 2025-05-02 13:19 UTC
Every NIX LF has the facility to maintain a contiguous SA table that
is used by NIX RX to find the exact SA context pointer associated with
a particular flow. Allocate a 128-entry SA table where each entry is
2048 bytes, which is enough to hold the complete inbound SA context.
Add the structure definitions for the SA context (cn10k_rx_sa_s) and
the SA bookkeeping information (cn10k_inb_sw_ctx_info).
Also, initialize the inb_sw_ctx_list to track all the SAs and their
associated NPC rules and hash table related data.
Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
---
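As an aside, a minimal sketch, assuming the qmem layout allocated in
this patch, of how an SA index resolves to its table entry; the
helper name is hypothetical:

/* Sketch only: entry N starts at N * sa_tbl_entry_sz (2048 bytes)
 * inside the qmem region; the first 1024 bytes hold the
 * cn10k_rx_sa_s context and the remainder is bookkeeping space.
 */
static struct cn10k_rx_sa_s *cn10k_inb_sa_entry(struct otx2_nic *pf,
                                                u32 sa_index)
{
        struct cn10k_ipsec *ipsec = &pf->ipsec;

        if (sa_index >= CN10K_IPSEC_INB_MAX_SA)
                return NULL;

        return (struct cn10k_rx_sa_s *)(ipsec->inb_sa->base +
                        sa_index * ipsec->sa_tbl_entry_sz);
}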
.../marvell/octeontx2/nic/cn10k_ipsec.c | 20 ++++
.../marvell/octeontx2/nic/cn10k_ipsec.h | 93 +++++++++++++++++++
2 files changed, 113 insertions(+)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
index fc59e50bafce..c435dcae4929 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
@@ -787,6 +787,7 @@ int cn10k_ipsec_init(struct net_device *netdev)
{
struct otx2_nic *pf = netdev_priv(netdev);
u32 sa_size;
+ int err;
if (!is_dev_support_ipsec_offload(pf->pdev))
return 0;
@@ -797,6 +798,22 @@ int cn10k_ipsec_init(struct net_device *netdev)
OTX2_ALIGN : sizeof(struct cn10k_tx_sa_s);
pf->ipsec.sa_size = sa_size;
+ /* Set sa_tbl_entry_sz to 2048 since NIX RX is programmed to
+ * locate an SA at byte offset SPI * 2048. The first 1024 bytes
+ * hold the SA context and the remaining 1024 bytes hold bookkeeping data.
+ */
+ pf->ipsec.sa_tbl_entry_sz = 2048;
+ err = qmem_alloc(pf->dev, &pf->ipsec.inb_sa, CN10K_IPSEC_INB_MAX_SA,
+ pf->ipsec.sa_tbl_entry_sz);
+ if (err)
+ return err;
+
+ memset(pf->ipsec.inb_sa->base, 0,
+ pf->ipsec.sa_tbl_entry_sz * CN10K_IPSEC_INB_MAX_SA);
+
+ /* List to track all ingress SAs */
+ INIT_LIST_HEAD(&pf->ipsec.inb_sw_ctx_list);
+
INIT_WORK(&pf->ipsec.sa_work, cn10k_ipsec_sa_wq_handler);
pf->ipsec.sa_workq = alloc_workqueue("cn10k_ipsec_sa_workq", 0, 0);
if (!pf->ipsec.sa_workq) {
@@ -828,6 +845,9 @@ void cn10k_ipsec_clean(struct otx2_nic *pf)
}
cn10k_outb_cpt_clean(pf);
+
+ /* Free Ingress SA table */
+ qmem_free(pf->dev, pf->ipsec.inb_sa);
}
EXPORT_SYMBOL(cn10k_ipsec_clean);
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
index 9965df0faa3e..6dd6ead0b28b 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
@@ -52,10 +52,14 @@ DECLARE_STATIC_KEY_FALSE(cn10k_ipsec_sa_enabled);
#define CN10K_CPT_LF_NQX(a) (CPT_LFBASE | 0x400 | (a) << 3)
#define CN10K_CPT_LF_CTX_FLUSH (CPT_LFBASE | 0x510)
+/* Inbound SA*/
+#define CN10K_IPSEC_INB_MAX_SA 128
+
/* IPSEC Instruction opcodes */
#define CN10K_IPSEC_MAJOR_OP_WRITE_SA 0x01UL
#define CN10K_IPSEC_MINOR_OP_WRITE_SA 0x09UL
#define CN10K_IPSEC_MAJOR_OP_OUTB_IPSEC 0x2AUL
+#define CN10K_IPSEC_MAJOR_OP_INB_IPSEC 0x29UL
enum cn10k_cpt_comp_e {
CN10K_CPT_COMP_E_NOTDONE = 0x00,
@@ -81,6 +85,19 @@ enum cn10k_cpt_hw_state_e {
CN10K_CPT_HW_IN_USE
};
+struct cn10k_inb_sw_ctx_info {
+ struct list_head list;
+ struct cn10k_rx_sa_s *sa_entry;
+ struct xfrm_state *x_state;
+ dma_addr_t sa_iova;
+ u32 npc_mcam_entry;
+ u32 sa_index;
+ u32 spi;
+ u16 hash_index; /* Hash index from SPI_TO_SA match */
+ u8 way; /* SPI_TO_SA match table way index */
+ bool delete_npc_and_match_entry;
+};
+
struct cn10k_ipsec {
/* Outbound CPT */
u64 io_addr;
@@ -92,6 +109,12 @@ struct cn10k_ipsec {
u32 outb_sa_count;
struct work_struct sa_work;
struct workqueue_struct *sa_workq;
+
+ /* For Inbound Inline IPSec flows */
+ u32 sa_tbl_entry_sz;
+ struct qmem *inb_sa;
+ struct list_head inb_sw_ctx_list;
+ DECLARE_BITMAP(inb_sa_table, CN10K_IPSEC_INB_MAX_SA);
};
/* CN10K IPSEC Security Association (SA) */
@@ -146,6 +169,76 @@ struct cn10k_tx_sa_s {
u64 hw_ctx[6]; /* W31 - W36 */
};
+struct cn10k_rx_sa_s {
+ u64 inb_ar_win_sz : 3; /* W0 */
+ u64 hard_life_dec : 1;
+ u64 soft_life_dec : 1;
+ u64 count_glb_octets : 1;
+ u64 count_glb_pkts : 1;
+ u64 count_mib_bytes : 1;
+ u64 count_mib_pkts : 1;
+ u64 hw_ctx_off : 7;
+ u64 ctx_id : 16;
+ u64 orig_pkt_fabs : 1;
+ u64 orig_pkt_free : 1;
+ u64 pkind : 6;
+ u64 rsvd_w0_40 : 1;
+ u64 eth_ovrwr : 1;
+ u64 pkt_output : 2;
+ u64 pkt_format : 1;
+ u64 defrag_opt : 2;
+ u64 x2p_dst : 1;
+ u64 ctx_push_size : 7;
+ u64 rsvd_w0_55 : 1;
+ u64 ctx_hdr_size : 2;
+ u64 aop_valid : 1;
+ u64 rsvd_w0_59 : 1;
+ u64 ctx_size : 4;
+
+ u64 rsvd_w1_31_0 : 32; /* W1 */
+ u64 cookie : 32;
+
+ u64 sa_valid : 1; /* W2 Control Word */
+ u64 sa_dir : 1;
+ u64 rsvd_w2_2_3 : 2;
+ u64 ipsec_mode : 1;
+ u64 ipsec_protocol : 1;
+ u64 aes_key_len : 2;
+ u64 enc_type : 3;
+ u64 life_unit : 1;
+ u64 auth_type : 4;
+ u64 encap_type : 2;
+ u64 et_ovrwr_ddr_en : 1;
+ u64 esn_en : 1;
+ u64 tport_l4_incr_csum : 1;
+ u64 iphdr_verify : 2;
+ u64 udp_ports_verify : 1;
+ u64 l2_l3_hdr_on_error : 1;
+ u64 rsvd_w25_31 : 7;
+ u64 spi : 32;
+
+ u64 w3; /* W3 */
+
+ u8 cipher_key[32]; /* W4 - W7 */
+ u32 rsvd_w8_0_31; /* W8 : IV */
+ u32 iv_gcm_salt;
+ u64 rsvd_w9; /* W9 */
+ u64 rsvd_w10; /* W10 : UDP Encap */
+ u32 dest_ipaddr; /* W11 - Tunnel mode: outer src and dest ipaddr */
+ u32 src_ipaddr;
+ u64 rsvd_w12_w30[19]; /* W12 - W30 */
+
+ u64 ar_base; /* W31 */
+ u64 ar_valid_mask; /* W32 */
+ u64 hard_sa_life; /* W33 */
+ u64 soft_sa_life; /* W34 */
+ u64 mib_octs; /* W35 */
+ u64 mib_pkts; /* W36 */
+ u64 ar_winbits; /* W37 */
+
+ u64 rsvd_w38_w100[63];
+};
+
/* CPT instruction parameter-1 */
#define CN10K_IPSEC_INST_PARAM1_DIS_L4_CSUM 0x1
#define CN10K_IPSEC_INST_PARAM1_DIS_L3_CSUM 0x2
--
2.43.0
* [net-next PATCH v1 10/15] octeontx2-pf: ipsec: Setup NIX HW resources for inbound flows
From: Tanmay Jagdale @ 2025-05-02 13:19 UTC
An incoming encrypted IPsec packet in the RVU NIX hardware needs
to be classified for inline fastpath processing and then assigned
an RQ and Aura pool before being sent to CPT for decryption.
Create a dedicated RQ, Aura and Pool with the following setup
specifically for IPsec flows:
- Set ipsech_en, ipsecd_drop_en in RQ context to enable hardware
fastpath processing for IPsec flows.
- Configure the dedicated Aura to raise an interrupt when
its buffer count drops below a threshold value so that the
buffers can be replenished from the CPU.
The RQ, Aura and Pool contexts are initialized only when the
esp-hw-offload feature is enabled via ethtool.
Also, move some of the RQ context macro definitions to otx2_common.h
so that they can be used in the IPsec driver as well.
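As a usage note, this path is exercised by enabling the feature with
"ethtool -K <iface> esp-hw-offload on". For illustration, a minimal
sketch of the threshold arithmetic used for the dedicated Aura above,
assuming the 256-pointer pool that this patch allocates (not driver
code, just the math):

	int numptrs = 256;			/* dedicated pool size */
	int shift   = ilog2(numptrs) - 8;	/* = 0 for 256 pointers */
	int thresh  = numptrs / 4;		/* = 64; NPA threshold level */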
Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
---
.../marvell/octeontx2/nic/cn10k_ipsec.c | 201 +++++++++++++++++-
.../marvell/octeontx2/nic/cn10k_ipsec.h | 2 +
.../marvell/octeontx2/nic/otx2_common.c | 23 +-
.../marvell/octeontx2/nic/otx2_common.h | 16 ++
.../ethernet/marvell/octeontx2/nic/otx2_pf.c | 4 +
5 files changed, 227 insertions(+), 19 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
index c435dcae4929..b88c1b4c5839 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
@@ -346,6 +346,193 @@ static int cn10k_outb_cpt_init(struct net_device *netdev)
return ret;
}
+static int cn10k_ipsec_ingress_aura_init(struct otx2_nic *pfvf, int aura_id,
+ int pool_id, int numptrs)
+{
+ struct npa_aq_enq_req *aq;
+ struct otx2_pool *pool;
+ int err;
+
+ pool = &pfvf->qset.pool[pool_id];
+
+ /* Allocate memory for HW to update Aura count.
+ * Alloc one cache line, so that it fits all FC_STYPE modes.
+ */
+ if (!pool->fc_addr) {
+ err = qmem_alloc(pfvf->dev, &pool->fc_addr, 1, OTX2_ALIGN);
+ if (err)
+ return err;
+ }
+
+ /* Initialize this aura's context via AF */
+ aq = otx2_mbox_alloc_msg_npa_aq_enq(&pfvf->mbox);
+ if (!aq)
+ return -ENOMEM;
+
+ aq->aura_id = aura_id;
+ /* Will be filled by AF with correct pool context address */
+ aq->aura.pool_addr = pool_id;
+ aq->aura.pool_caching = 1;
+ aq->aura.shift = ilog2(numptrs) - 8;
+ aq->aura.count = numptrs;
+ aq->aura.limit = numptrs;
+ aq->aura.avg_level = 255;
+ aq->aura.ena = 1;
+ aq->aura.fc_ena = 1;
+ aq->aura.fc_addr = pool->fc_addr->iova;
+ aq->aura.fc_hyst_bits = 0; /* Store count on all updates */
+ aq->aura.thresh_up = 1;
+ aq->aura.thresh = aq->aura.count / 4;
+ aq->aura.thresh_qint_idx = 0;
+
+ /* Enable backpressure for RQ aura */
+ if (!is_otx2_lbkvf(pfvf->pdev)) {
+ aq->aura.bp_ena = 0;
+ /* If NIX1 LF is attached then specify NIX1_RX.
+ *
+ * Below NPA_AURA_S[BP_ENA] is set according to the
+ * NPA_BPINTF_E enumeration given as:
+ * 0x0 + a*0x1 where 'a' is 0 for NIX0_RX and 1 for NIX1_RX so
+ * NIX0_RX is 0x0 + 0*0x1 = 0
+ * NIX1_RX is 0x0 + 1*0x1 = 1
+ * But in HRM it is given that
+ * "NPA_AURA_S[BP_ENA](w1[33:32]) - Enable aura backpressure to
+ * NIX-RX based on [BP] level. One bit per NIX-RX; index
+ * enumerated by NPA_BPINTF_E."
+ */
+ if (pfvf->nix_blkaddr == BLKADDR_NIX1)
+ aq->aura.bp_ena = 1;
+#ifdef CONFIG_DCB
+ aq->aura.nix0_bpid = pfvf->bpid[pfvf->queue_to_pfc_map[aura_id]];
+#else
+ aq->aura.nix0_bpid = pfvf->bpid[0];
+#endif
+
+ /* Set backpressure level for RQ's Aura */
+ aq->aura.bp = RQ_BP_LVL_AURA;
+ }
+
+ /* Fill AQ info */
+ aq->ctype = NPA_AQ_CTYPE_AURA;
+ aq->op = NPA_AQ_INSTOP_INIT;
+
+ return otx2_sync_mbox_msg(&pfvf->mbox);
+}
+
+static int cn10k_ipsec_ingress_rq_init(struct otx2_nic *pfvf, u16 qidx, u16 lpb_aura)
+{
+ struct otx2_qset *qset = &pfvf->qset;
+ struct nix_cn10k_aq_enq_req *aq;
+
+ /* Get memory to put this msg */
+ aq = otx2_mbox_alloc_msg_nix_cn10k_aq_enq(&pfvf->mbox);
+ if (!aq)
+ return -ENOMEM;
+
+ aq->rq.cq = qidx;
+ aq->rq.ena = 1;
+ aq->rq.pb_caching = 1;
+ aq->rq.lpb_aura = lpb_aura; /* Use large packet buffer aura */
+ aq->rq.lpb_sizem1 = (DMA_BUFFER_LEN(pfvf->rbsize) / 8) - 1;
+ aq->rq.xqe_imm_size = 0; /* Copying of packet to CQE not needed */
+ aq->rq.flow_tagw = 32; /* Copy full 32bit flow_tag to CQE header */
+ aq->rq.qint_idx = 0;
+ aq->rq.lpb_drop_ena = 1; /* Enable RED dropping for AURA */
+ aq->rq.xqe_drop_ena = 1; /* Enable RED dropping for CQ/SSO */
+ aq->rq.xqe_pass = RQ_PASS_LVL_CQ(pfvf->hw.rq_skid, qset->rqe_cnt);
+ aq->rq.xqe_drop = RQ_DROP_LVL_CQ(pfvf->hw.rq_skid, qset->rqe_cnt);
+ aq->rq.lpb_aura_pass = RQ_PASS_LVL_AURA;
+ aq->rq.lpb_aura_drop = RQ_DROP_LVL_AURA;
+ aq->rq.ipsech_ena = 1; /* IPsec HW fast path enable */
+ aq->rq.ipsecd_drop_ena = 1; /* IPsec dynamic drop enable */
+ aq->rq.xqe_drop_ena = 0;
+ aq->rq.ena_wqwd = 1; /* Store NIX headers in packet buffer */
+ aq->rq.first_skip = 16; /* Store packet after skipping 16*8 bytes
+ * to accommodate NIX headers.
+ */
+
+ /* Fill AQ info */
+ aq->qidx = qidx;
+ aq->ctype = NIX_AQ_CTYPE_RQ;
+ aq->op = NIX_AQ_INSTOP_INIT;
+
+ return otx2_sync_mbox_msg(&pfvf->mbox);
+}
+
+static int cn10k_ipsec_setup_nix_rx_hw_resources(struct otx2_nic *pfvf)
+{
+ struct otx2_hw *hw = &pfvf->hw;
+ int stack_pages, pool_id;
+ struct otx2_pool *pool;
+ int err, ptr, num_ptrs;
+ dma_addr_t bufptr;
+
+ num_ptrs = 256;
+ pool_id = pfvf->ipsec.inb_ipsec_pool;
+ stack_pages = (num_ptrs + hw->stack_pg_ptrs - 1) / hw->stack_pg_ptrs;
+
+ mutex_lock(&pfvf->mbox.lock);
+
+ /* Initialize aura context */
+ err = cn10k_ipsec_ingress_aura_init(pfvf, pool_id, pool_id, num_ptrs);
+ if (err)
+ goto fail;
+
+ /* Initialize pool */
+ err = otx2_pool_init(pfvf, pool_id, stack_pages, num_ptrs, pfvf->rbsize, AURA_NIX_RQ);
+ if (err)
+ goto fail;
+
+ /* Flush accumulated messages */
+ err = otx2_sync_mbox_msg(&pfvf->mbox);
+ if (err)
+ goto pool_fail;
+
+ /* Allocate pointers and free them to aura/pool */
+ pool = &pfvf->qset.pool[pool_id];
+ for (ptr = 0; ptr < num_ptrs; ptr++) {
+ err = otx2_alloc_rbuf(pfvf, pool, &bufptr, pool_id, ptr);
+ if (err) {
+ err = -ENOMEM;
+ goto pool_fail;
+ }
+ pfvf->hw_ops->aura_freeptr(pfvf, pool_id, bufptr + OTX2_HEAD_ROOM);
+ }
+
+ /* Initialize RQ and map buffers from pool_id */
+ err = cn10k_ipsec_ingress_rq_init(pfvf, pfvf->ipsec.inb_ipsec_rq, pool_id);
+ if (err)
+ goto pool_fail;
+
+ mutex_unlock(&pfvf->mbox.lock);
+ return 0;
+
+pool_fail:
+ mutex_unlock(&pfvf->mbox.lock);
+ qmem_free(pfvf->dev, pool->stack);
+ qmem_free(pfvf->dev, pool->fc_addr);
+ page_pool_destroy(pool->page_pool);
+ devm_kfree(pfvf->dev, pool->xdp);
+ pool->xsk_pool = NULL;
+fail:
+ otx2_mbox_reset(&pfvf->mbox.mbox, 0);
+ return err;
+}
+
+static int cn10k_inb_cpt_init(struct net_device *netdev)
+{
+ struct otx2_nic *pfvf = netdev_priv(netdev);
+ int ret = 0;
+
+ ret = cn10k_ipsec_setup_nix_rx_hw_resources(pfvf);
+ if (ret) {
+ netdev_err(netdev, "Failed to setup NIX HW resources for IPsec\n");
+ return ret;
+ }
+
+ return ret;
+}
+
static int cn10k_outb_cpt_clean(struct otx2_nic *pf)
{
int ret;
@@ -765,14 +952,22 @@ static void cn10k_ipsec_sa_wq_handler(struct work_struct *work)
int cn10k_ipsec_ethtool_init(struct net_device *netdev, bool enable)
{
struct otx2_nic *pf = netdev_priv(netdev);
+ int ret = 0;
/* IPsec offload supported on cn10k */
if (!is_dev_support_ipsec_offload(pf->pdev))
return -EOPNOTSUPP;
- /* Initialize CPT for outbound ipsec offload */
- if (enable)
- return cn10k_outb_cpt_init(netdev);
+ /* Initialize CPT for outbound and inbound IPsec offload */
+ if (enable) {
+ ret = cn10k_outb_cpt_init(netdev);
+ if (ret)
+ return ret;
+
+ ret = cn10k_inb_cpt_init(netdev);
+ if (ret)
+ return ret;
+ }
/* Don't do CPT cleanup if SA installed */
if (pf->ipsec.outb_sa_count) {
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
index 6dd6ead0b28b..5b7b8f3db913 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
@@ -111,6 +111,8 @@ struct cn10k_ipsec {
struct workqueue_struct *sa_workq;
/* For Inbound Inline IPSec flows */
+ u16 inb_ipsec_rq;
+ u16 inb_ipsec_pool;
u32 sa_tbl_entry_sz;
struct qmem *inb_sa;
struct list_head inb_sw_ctx_list;
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
index 84cd029a85aa..c077e5ae346f 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
@@ -877,22 +877,6 @@ void otx2_sqb_flush(struct otx2_nic *pfvf)
}
}
-/* RED and drop levels of CQ on packet reception.
- * For CQ level is measure of emptiness ( 0x0 = full, 255 = empty).
- */
-#define RQ_PASS_LVL_CQ(skid, qsize) ((((skid) + 16) * 256) / (qsize))
-#define RQ_DROP_LVL_CQ(skid, qsize) (((skid) * 256) / (qsize))
-
-/* RED and drop levels of AURA for packet reception.
- * For AURA level is measure of fullness (0x0 = empty, 255 = full).
- * Eg: For RQ length 1K, for pass/drop level 204/230.
- * RED accepts pkts if free pointers > 102 & <= 205.
- * Drops pkts if free pointers < 102.
- */
-#define RQ_BP_LVL_AURA (255 - ((85 * 256) / 100)) /* BP when 85% is full */
-#define RQ_PASS_LVL_AURA (255 - ((95 * 256) / 100)) /* RED when 95% is full */
-#define RQ_DROP_LVL_AURA (255 - ((99 * 256) / 100)) /* Drop when 99% is full */
-
int otx2_rq_init(struct otx2_nic *pfvf, u16 qidx, u16 lpb_aura)
{
struct otx2_qset *qset = &pfvf->qset;
@@ -1242,6 +1226,13 @@ int otx2_config_nix(struct otx2_nic *pfvf)
nixlf->rss_sz = MAX_RSS_INDIR_TBL_SIZE;
nixlf->rss_grps = MAX_RSS_GROUPS;
nixlf->xqe_sz = pfvf->hw.xqe_size == 128 ? NIX_XQESZ_W16 : NIX_XQESZ_W64;
+ /* Add an additional RQ for inline inbound IPsec flows
+ * and store the RQ index for setting it up later when
+ * IPsec offload is enabled via ethtool.
+ */
+ nixlf->rq_cnt++;
+ pfvf->ipsec.inb_ipsec_rq = pfvf->hw.rx_queues;
+
/* We don't know absolute NPA LF idx attached.
* AF will replace 'RVU_DEFAULT_PF_FUNC' with
* NPA LF attached to this RVU PF/VF.
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
index 7e3ddb0bee12..b5b87b5553ea 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
@@ -76,6 +76,22 @@ enum arua_mapped_qtypes {
/* Send skid of 2000 packets required for CQ size of 4K CQEs. */
#define SEND_CQ_SKID 2000
+/* RED and drop levels of CQ on packet reception.
+ * For CQ level is measure of emptiness ( 0x0 = full, 255 = empty).
+ */
+#define RQ_PASS_LVL_CQ(skid, qsize) ((((skid) + 16) * 256) / (qsize))
+#define RQ_DROP_LVL_CQ(skid, qsize) (((skid) * 256) / (qsize))
+
+/* RED and drop levels of AURA for packet reception.
+ * For AURA level is measure of fullness (0x0 = empty, 255 = full).
+ * Eg: For RQ length 1K, for pass/drop level 204/230.
+ * RED accepts pkts if free pointers > 102 & <= 205.
+ * Drops pkts if free pointers < 102.
+ */
+#define RQ_BP_LVL_AURA (255 - ((85 * 256) / 100)) /* BP when 85% is full */
+#define RQ_PASS_LVL_AURA (255 - ((95 * 256) / 100)) /* RED when 95% is full */
+#define RQ_DROP_LVL_AURA (255 - ((99 * 256) / 100)) /* Drop when 99% is full */
+
#define OTX2_GET_RX_STATS(reg) \
otx2_read64(pfvf, NIX_LF_RX_STATX(reg))
#define OTX2_GET_TX_STATS(reg) \
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
index 0aee8e3861f3..8f1c17fa5a0b 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
@@ -1538,6 +1538,10 @@ int otx2_init_hw_resources(struct otx2_nic *pf)
hw->sqpool_cnt = otx2_get_total_tx_queues(pf);
hw->pool_cnt = hw->rqpool_cnt + hw->sqpool_cnt;
+ /* Increase pool count by 1 for ingress inline IPsec */
+ pf->ipsec.inb_ipsec_pool = hw->pool_cnt;
+ hw->pool_cnt++;
+
if (!otx2_rep_dev(pf->pdev)) {
/* Maximum hardware supported transmit length */
pf->tx_max_pktlen = pf->netdev->max_mtu + OTX2_ETH_HLEN;
--
2.43.0
^ permalink raw reply related [flat|nested] 43+ messages in thread
* [net-next PATCH v1 11/15] octeontx2-pf: ipsec: Handle NPA threshold interrupt
2025-05-02 13:19 [net-next PATCH v1 00/15] Enable Inbound IPsec offload on Marvell CN10K SoC Tanmay Jagdale
` (9 preceding siblings ...)
2025-05-02 13:19 ` [net-next PATCH v1 10/15] octeontx2-pf: ipsec: Setup NIX HW resources for inbound flows Tanmay Jagdale
@ 2025-05-02 13:19 ` Tanmay Jagdale
2025-05-07 12:04 ` kernel test robot
2025-05-07 14:20 ` Simon Horman
2025-05-02 13:19 ` [net-next PATCH v1 12/15] octeontx2-pf: ipsec: Initialize ingress IPsec Tanmay Jagdale
` (4 subsequent siblings)
15 siblings, 2 replies; 43+ messages in thread
From: Tanmay Jagdale @ 2025-05-02 13:19 UTC (permalink / raw)
To: bbrezillon, arno, schalla, herbert, davem, sgoutham, lcherian,
gakula, jerinj, hkelam, sbhatta, andrew+netdev, edumazet, kuba,
pabeni, bbhushan2, bhelgaas, pstanner, gregkh, peterz, linux,
krzysztof.kozlowski, giovanni.cabiddu
Cc: linux-crypto, linux-kernel, netdev, rkannoth, sumang, gcherian,
Tanmay Jagdale
The NPA Aura pool that is dedicated to 1st-pass inline IPsec flows
raises an interrupt when the buffer count of that aura_id drops below
a threshold value.
Add the following changes to handle this interrupt:
- Increase the number of MSIX vectors requested for the PF/VF to
include NPA vector.
- Create a workqueue (refill_npa_inline_ipsecq) to allocate and
refill buffers to the pool.
- When the interrupt is raised, schedule the workqueue entry,
cn10k_ipsec_npa_refill_inb_ipsecq(), where the current count of
consumed buffers is read via NPA_LF_AURA_OP_CNT and that many
buffers are replenished.
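As an illustration of the register decode done in that work handler
(bit positions taken from this patch; the helper names are
hypothetical, not part of the driver):

	static bool aura_thresh_pending(u64 op_int)
	{
		return op_int & BIT_ULL(16);	/* threshold interrupt */
	}

	static u8 aura_err_bits(u64 op_int)
	{
		return op_int & 0xff;		/* error interrupt bits */
	}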
Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
---
.../marvell/octeontx2/nic/cn10k_ipsec.c | 102 +++++++++++++++++-
.../marvell/octeontx2/nic/cn10k_ipsec.h | 1 +
.../ethernet/marvell/octeontx2/nic/otx2_pf.c | 4 +
.../ethernet/marvell/octeontx2/nic/otx2_vf.c | 4 +
4 files changed, 110 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
index b88c1b4c5839..365327ab9079 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
@@ -519,10 +519,77 @@ static int cn10k_ipsec_setup_nix_rx_hw_resources(struct otx2_nic *pfvf)
return err;
}
+static void cn10k_ipsec_npa_refill_inb_ipsecq(struct work_struct *work)
+{
+ struct cn10k_ipsec *ipsec = container_of(work, struct cn10k_ipsec,
+ refill_npa_inline_ipsecq);
+ struct otx2_nic *pfvf = container_of(ipsec, struct otx2_nic, ipsec);
+ struct otx2_pool *pool = NULL;
+ struct otx2_qset *qset = NULL;
+ u64 val, *ptr, op_int = 0, count;
+ int err, pool_id, idx;
+ dma_addr_t bufptr;
+
+ qset = &pfvf->qset;
+
+ val = otx2_read64(pfvf, NPA_LF_QINTX_INT(0));
+ if (!(val & 1))
+ return;
+
+ ptr = otx2_get_regaddr(pfvf, NPA_LF_AURA_OP_INT);
+ val = otx2_atomic64_add(((u64)pfvf->ipsec.inb_ipsec_pool << 44), ptr);
+
+ /* Error interrupt bits */
+ if (val & 0xff)
+ op_int = (val & 0xff);
+
+ /* Refill buffers on a Threshold interrupt */
+ if (val & (1 << 16)) {
+ /* Get the current number of buffers consumed */
+ ptr = otx2_get_regaddr(pfvf, NPA_LF_AURA_OP_CNT);
+ count = otx2_atomic64_add(((u64)pfvf->ipsec.inb_ipsec_pool << 44), ptr);
+ count &= GENMASK_ULL(35, 0);
+
+ /* Refill */
+ pool_id = pfvf->ipsec.inb_ipsec_pool;
+ pool = &pfvf->qset.pool[pool_id];
+
+ for (idx = 0; idx < count; idx++) {
+ err = otx2_alloc_rbuf(pfvf, pool, &bufptr, pool_id, idx);
+ if (err) {
+ netdev_err(pfvf->netdev,
+ "Insufficient memory for IPsec pool buffers\n");
+ break;
+ }
+ pfvf->hw_ops->aura_freeptr(pfvf, pool_id,
+ bufptr + OTX2_HEAD_ROOM);
+ }
+
+ op_int |= (1 << 16);
+ }
+
+ /* Clear/ACK Interrupt */
+ if (op_int)
+ otx2_write64(pfvf, NPA_LF_AURA_OP_INT,
+ ((u64)pfvf->ipsec.inb_ipsec_pool << 44) | op_int);
+}
+
+static irqreturn_t cn10k_ipsec_npa_inb_ipsecq_intr_handler(int irq, void *data)
+{
+ struct otx2_nic *pf = data;
+
+ schedule_work(&pf->ipsec.refill_npa_inline_ipsecq);
+
+ return IRQ_HANDLED;
+}
+
static int cn10k_inb_cpt_init(struct net_device *netdev)
{
struct otx2_nic *pfvf = netdev_priv(netdev);
- int ret = 0;
+ int ret = 0, vec;
+ char *irq_name;
+ void *ptr;
+ u64 val;
ret = cn10k_ipsec_setup_nix_rx_hw_resources(pfvf);
if (ret) {
@@ -530,6 +597,34 @@ static int cn10k_inb_cpt_init(struct net_device *netdev)
return ret;
}
+ /* Work entry for refilling the NPA queue for ingress inline IPSec */
+ INIT_WORK(&pfvf->ipsec.refill_npa_inline_ipsecq,
+ cn10k_ipsec_npa_refill_inb_ipsecq);
+
+ /* Register NPA interrupt */
+ vec = pfvf->hw.npa_msixoff;
+ irq_name = &pfvf->hw.irq_name[vec * NAME_SIZE];
+ snprintf(irq_name, NAME_SIZE, "%s-npa-qint", pfvf->netdev->name);
+
+ ret = request_irq(pci_irq_vector(pfvf->pdev, vec),
+ cn10k_ipsec_npa_inb_ipsecq_intr_handler, 0,
+ irq_name, pfvf);
+ if (ret) {
+ dev_err(pfvf->dev,
+ "RVUPF%d: IRQ registration failed for NPA QINT%d\n",
+ rvu_get_pf(pfvf->pcifunc), 0);
+ return ret;
+ }
+
+ /* Enable NPA threshold interrupt */
+ ptr = otx2_get_regaddr(pfvf, NPA_LF_AURA_OP_INT);
+ val = BIT_ULL(43) | BIT_ULL(17);
+ otx2_write64(pfvf, NPA_LF_AURA_OP_INT,
+ ((u64)pfvf->ipsec.inb_ipsec_pool << 44) | val);
+
+ /* Enable interrupt */
+ otx2_write64(pfvf, NPA_LF_QINTX_ENA_W1S(0), BIT_ULL(0));
+
return ret;
}
@@ -1028,6 +1123,8 @@ EXPORT_SYMBOL(cn10k_ipsec_init);
void cn10k_ipsec_clean(struct otx2_nic *pf)
{
+ int vec;
+
if (!is_dev_support_ipsec_offload(pf->pdev))
return;
@@ -1043,6 +1140,9 @@ void cn10k_ipsec_clean(struct otx2_nic *pf)
/* Free Ingress SA table */
qmem_free(pf->dev, pf->ipsec.inb_sa);
+
+ vec = pci_irq_vector(pf->pdev, pf->hw.npa_msixoff);
+ free_irq(vec, pf);
}
EXPORT_SYMBOL(cn10k_ipsec_clean);
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
index 5b7b8f3db913..30d5812d52ad 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
@@ -117,6 +117,7 @@ struct cn10k_ipsec {
struct qmem *inb_sa;
struct list_head inb_sw_ctx_list;
DECLARE_BITMAP(inb_sa_table, CN10K_IPSEC_INB_MAX_SA);
+ struct work_struct refill_npa_inline_ipsecq;
};
/* CN10K IPSEC Security Association (SA) */
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
index 8f1c17fa5a0b..0ffc56efcc23 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
@@ -2909,6 +2909,10 @@ int otx2_realloc_msix_vectors(struct otx2_nic *pf)
num_vec = hw->nix_msixoff;
num_vec += NIX_LF_CINT_VEC_START + hw->max_queues;
+ /* Update number of vectors to include NPA */
+ if (hw->nix_msixoff < hw->npa_msixoff)
+ num_vec = hw->npa_msixoff + 1;
+
otx2_disable_mbox_intr(pf);
pci_free_irq_vectors(hw->pdev);
err = pci_alloc_irq_vectors(hw->pdev, num_vec, num_vec, PCI_IRQ_MSIX);
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
index fb4da816d218..0b0f8a94ca41 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
@@ -521,6 +521,10 @@ static int otx2vf_realloc_msix_vectors(struct otx2_nic *vf)
num_vec = hw->nix_msixoff;
num_vec += NIX_LF_CINT_VEC_START + hw->max_queues;
+ /* Update number of vectors to include NPA */
+ if (hw->nix_msixoff < hw->npa_msixoff)
+ num_vec = hw->npa_msixoff + 1;
+
otx2vf_disable_mbox_intr(vf);
pci_free_irq_vectors(hw->pdev);
err = pci_alloc_irq_vectors(hw->pdev, num_vec, num_vec, PCI_IRQ_MSIX);
--
2.43.0
^ permalink raw reply related [flat|nested] 43+ messages in thread
* [net-next PATCH v1 12/15] octeontx2-pf: ipsec: Initialize ingress IPsec
2025-05-02 13:19 [net-next PATCH v1 00/15] Enable Inbound IPsec offload on Marvell CN10K SoC Tanmay Jagdale
` (10 preceding siblings ...)
2025-05-02 13:19 ` [net-next PATCH v1 11/15] octeontx2-pf: ipsec: Handle NPA threshold interrupt Tanmay Jagdale
@ 2025-05-02 13:19 ` Tanmay Jagdale
2025-05-02 13:19 ` [net-next PATCH v1 13/15] octeontx2-pf: ipsec: Manage NPC rules and SPI-to-SA table entries Tanmay Jagdale
` (3 subsequent siblings)
15 siblings, 0 replies; 43+ messages in thread
From: Tanmay Jagdale @ 2025-05-02 13:19 UTC (permalink / raw)
To: bbrezillon, arno, schalla, herbert, davem, sgoutham, lcherian,
gakula, jerinj, hkelam, sbhatta, andrew+netdev, edumazet, kuba,
pabeni, bbhushan2, bhelgaas, pstanner, gregkh, peterz, linux,
krzysztof.kozlowski, giovanni.cabiddu
Cc: linux-crypto, linux-kernel, netdev, rkannoth, sumang, gcherian,
Tanmay Jagdale
Initialize ingress inline IPsec offload when the ESP offload feature
is enabled via ethtool. As part of initialization, the following
mailboxes must be invoked to configure inline IPsec:
NIX_INLINE_IPSEC_LF_CFG - Every NIX LF has the provision to maintain a
contiguous SA table. This mailbox configures
the SA table base address, the size of each SA
and the maximum number of entries in the table.
Currently, we support a 128-entry table with
each SA of size 1024 bytes.
NIX_LF_INLINE_RQ_CFG - Post decryption, CPT sends a metapacket of 256
bytes which has enough packet headers to help
NIX RX classify it. However, since the packet is
not complete, we cannot perform checksum and
packet length verification. Hence, configure the
RQ context to disable L3, L4 checksum and length
verification for packets coming from CPT.
NIX_INLINE_IPSEC_CFG - RVU hardware supports one common CPT LF for
ingress inline IPsec flows. This CPT LF is
configured via this mailbox; it is a one-time,
system-wide configuration.
NIX_ALLOC_BPID - Configure backpressure between the NIX and CPT
blocks by allocating a backpressure ID via this
mailbox for ingress inline IPsec flows.
NIX_FREE_BPID - Free this BPID when ESP offload is disabled via
ethtool.
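For clarity, a condensed sketch of the order in which these mailboxes
are invoked from cn10k_inb_cpt_init() (error handling trimmed; see the
diff below for the real code):

	ret = cn10k_inb_nix_inline_lf_cfg(pfvf);	/* SA table base/size */
	if (!ret)
		ret = cn10k_inb_nix_inline_lf_rq_cfg(pfvf); /* relax RQ checks */
	if (!ret) {
		/* one-time, system-wide; -EEXIST means already configured */
		ret = cn10k_inb_nix_inline_ipsec_cfg(pfvf);
		if (ret == -EEXIST)
			ret = 0;
	}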
Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
---
.../marvell/octeontx2/nic/cn10k_ipsec.c | 167 ++++++++++++++++++
.../marvell/octeontx2/nic/cn10k_ipsec.h | 2 +
2 files changed, 169 insertions(+)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
index 365327ab9079..c6f408007511 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
@@ -346,6 +346,97 @@ static int cn10k_outb_cpt_init(struct net_device *netdev)
return ret;
}
+static int cn10k_inb_nix_inline_lf_cfg(struct otx2_nic *pfvf)
+{
+ struct nix_inline_ipsec_lf_cfg *req;
+ int ret = 0;
+
+ mutex_lock(&pfvf->mbox.lock);
+ req = otx2_mbox_alloc_msg_nix_inline_ipsec_lf_cfg(&pfvf->mbox);
+ if (!req) {
+ ret = -ENOMEM;
+ goto error;
+ }
+
+ req->sa_base_addr = pfvf->ipsec.inb_sa->iova;
+ req->ipsec_cfg0.tag_const = 0;
+ req->ipsec_cfg0.tt = 0;
+ req->ipsec_cfg0.lenm1_max = 11872; /* (Max packet size - 128 (first skip)) */
+ req->ipsec_cfg0.sa_pow2_size = 0xb; /* 2048 */
+ req->ipsec_cfg1.sa_idx_max = CN10K_IPSEC_INB_MAX_SA - 1;
+ req->ipsec_cfg1.sa_idx_w = 0x7;
+ req->enable = 1;
+
+ ret = otx2_sync_mbox_msg(&pfvf->mbox);
+error:
+ mutex_unlock(&pfvf->mbox.lock);
+ return ret;
+}
+
+static int cn10k_inb_nix_inline_lf_rq_cfg(struct otx2_nic *pfvf)
+{
+ struct nix_rq_cpt_field_mask_cfg_req *req;
+ int ret = 0, i;
+
+ mutex_lock(&pfvf->mbox.lock);
+ req = otx2_mbox_alloc_msg_nix_lf_inline_rq_cfg(&pfvf->mbox);
+ if (!req) {
+ ret = -ENOMEM;
+ goto error;
+ }
+
+ for (i = 0; i < RQ_CTX_MASK_MAX; i++)
+ req->rq_ctx_word_mask[i] = 0xffffffffffffffff;
+
+ req->rq_set.len_ol3_dis = 1;
+ req->rq_set.len_ol4_dis = 1;
+ req->rq_set.len_il3_dis = 1;
+
+ req->rq_set.len_il4_dis = 1;
+ req->rq_set.csum_ol4_dis = 1;
+ req->rq_set.csum_il4_dis = 1;
+
+ req->rq_set.lenerr_dis = 1;
+ req->rq_set.port_ol4_dis = 1;
+ req->rq_set.port_il4_dis = 1;
+
+ req->ipsec_cfg1.rq_mask_enable = 1;
+ req->ipsec_cfg1.spb_cpt_enable = 0;
+
+ ret = otx2_sync_mbox_msg(&pfvf->mbox);
+error:
+ mutex_unlock(&pfvf->mbox.lock);
+ return ret;
+}
+
+static int cn10k_inb_nix_inline_ipsec_cfg(struct otx2_nic *pfvf)
+{
+ struct cpt_rx_inline_lf_cfg_msg *req;
+ int ret = 0;
+
+ mutex_lock(&pfvf->mbox.lock);
+ req = otx2_mbox_alloc_msg_cpt_rx_inline_lf_cfg(&pfvf->mbox);
+ if (!req) {
+ ret = -ENOMEM;
+ goto error;
+ }
+
+ req->sso_pf_func = 0;
+ req->opcode = CN10K_IPSEC_MAJOR_OP_INB_IPSEC | (1 << 6);
+ req->param1 = 7; /* bit 0:ip_csum_dis 1:tcp_csum_dis 2:esp_trailer_dis */
+ req->param2 = 0;
+ req->bpid = pfvf->ipsec.bpid;
+ req->credit = 8160;
+ req->credit_th = 100;
+ req->ctx_ilen_valid = 1;
+ req->ctx_ilen = 5;
+
+ ret = otx2_sync_mbox_msg(&pfvf->mbox);
+error:
+ mutex_unlock(&pfvf->mbox.lock);
+ return ret;
+}
+
static int cn10k_ipsec_ingress_aura_init(struct otx2_nic *pfvf, int aura_id,
int pool_id, int numptrs)
{
@@ -625,6 +716,28 @@ static int cn10k_inb_cpt_init(struct net_device *netdev)
/* Enable interrupt */
otx2_write64(pfvf, NPA_LF_QINTX_ENA_W1S(0), BIT_ULL(0));
+ /* Enable inbound inline IPSec in NIX LF */
+ ret = cn10k_inb_nix_inline_lf_cfg(pfvf);
+ if (ret) {
+ netdev_err(netdev, "Error configuring NIX for Inline IPSec\n");
+ goto out;
+ }
+
+ /* IPsec specific RQ settings in NIX LF */
+ ret = cn10k_inb_nix_inline_lf_rq_cfg(pfvf);
+ if (ret) {
+ netdev_err(netdev, "Error configuring NIX for Inline IPSec\n");
+ goto out;
+ }
+
+ /* One-time configuration to enable CPT LF for inline inbound IPSec */
+ ret = cn10k_inb_nix_inline_ipsec_cfg(pfvf);
+ if (ret && ret != -EEXIST)
+ netdev_err(netdev, "CPT LF configuration error\n");
+ else
+ ret = 0;
+
+out:
return ret;
}
@@ -1044,6 +1157,53 @@ static void cn10k_ipsec_sa_wq_handler(struct work_struct *work)
rtnl_unlock();
}
+static int cn10k_ipsec_configure_cpt_bpid(struct otx2_nic *pfvf)
+{
+ struct nix_alloc_bpid_req *req;
+ struct nix_bpids *rsp;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_nix_alloc_bpids(&pfvf->mbox);
+ if (!req)
+ return -ENOMEM;
+ req->bpid_cnt = 1;
+ req->type = NIX_INTF_TYPE_CPT;
+
+ rc = otx2_sync_mbox_msg(&pfvf->mbox);
+ if (rc)
+ return rc;
+
+ rsp = (struct nix_bpids *)otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr);
+ if (IS_ERR(rsp))
+ return PTR_ERR(rsp);
+
+ /* Store the bpid for configuring it in the future */
+ pfvf->ipsec.bpid = rsp->bpids[0];
+
+ return 0;
+}
+
+static int cn10k_ipsec_free_cpt_bpid(struct otx2_nic *pfvf)
+{
+ struct nix_bpids *req;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_nix_free_bpids(&pfvf->mbox);
+ if (!req)
+ return -ENOMEM;
+
+ req->bpid_cnt = 1;
+ req->bpids[0] = pfvf->ipsec.bpid;
+
+ rc = otx2_sync_mbox_msg(&pfvf->mbox);
+ if (rc)
+ return rc;
+
+ /* Clear the bpid */
+ pfvf->ipsec.bpid = 0;
+ return 0;
+}
+
int cn10k_ipsec_ethtool_init(struct net_device *netdev, bool enable)
{
struct otx2_nic *pf = netdev_priv(netdev);
@@ -1062,6 +1222,10 @@ int cn10k_ipsec_ethtool_init(struct net_device *netdev, bool enable)
ret = cn10k_inb_cpt_init(netdev);
if (ret)
return ret;
+
+ /* Configure NIX <-> CPT backpressure */
+ ret = cn10k_ipsec_configure_cpt_bpid(pf);
+ return ret;
}
/* Don't do CPT cleanup if SA installed */
@@ -1070,6 +1234,7 @@ int cn10k_ipsec_ethtool_init(struct net_device *netdev, bool enable)
return -EBUSY;
}
+ cn10k_ipsec_free_cpt_bpid(pf);
return cn10k_outb_cpt_clean(pf);
}
@@ -1143,6 +1308,8 @@ void cn10k_ipsec_clean(struct otx2_nic *pf)
vec = pci_irq_vector(pf->pdev, pf->hw.npa_msixoff);
free_irq(vec, pf);
+
+ cn10k_ipsec_free_cpt_bpid(pf);
}
EXPORT_SYMBOL(cn10k_ipsec_clean);
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
index 30d5812d52ad..f042cbadf054 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
@@ -104,6 +104,8 @@ struct cn10k_ipsec {
atomic_t cpt_state;
struct cn10k_cpt_inst_queue iq;
+ u32 bpid; /* Backpressure ID for NIX <-> CPT */
+
/* SA info */
u32 sa_size;
u32 outb_sa_count;
--
2.43.0
^ permalink raw reply related [flat|nested] 43+ messages in thread
* [net-next PATCH v1 13/15] octeontx2-pf: ipsec: Manage NPC rules and SPI-to-SA table entries
2025-05-02 13:19 [net-next PATCH v1 00/15] Enable Inbound IPsec offload on Marvell CN10K SoC Tanmay Jagdale
` (11 preceding siblings ...)
2025-05-02 13:19 ` [net-next PATCH v1 12/15] octeontx2-pf: ipsec: Initialize ingress IPsec Tanmay Jagdale
@ 2025-05-02 13:19 ` Tanmay Jagdale
2025-05-07 15:58 ` Simon Horman
2025-05-02 13:19 ` [net-next PATCH v1 14/15] octeontx2-pf: ipsec: Process CPT metapackets Tanmay Jagdale
` (2 subsequent siblings)
15 siblings, 1 reply; 43+ messages in thread
From: Tanmay Jagdale @ 2025-05-02 13:19 UTC (permalink / raw)
To: bbrezillon, arno, schalla, herbert, davem, sgoutham, lcherian,
gakula, jerinj, hkelam, sbhatta, andrew+netdev, edumazet, kuba,
pabeni, bbhushan2, bhelgaas, pstanner, gregkh, peterz, linux,
krzysztof.kozlowski, giovanni.cabiddu
Cc: linux-crypto, linux-kernel, netdev, rkannoth, sumang, gcherian,
Tanmay Jagdale
NPC rule for IPsec flows
------------------------
Incoming IPsec packets are first classified for hardware fastpath
processing in the NPC block. Hence, allocate an MCAM entry in NPC
using the MCAM_ALLOC_ENTRY mailbox to add a rule for IPsec flow
classification.
Then, install an NPC rule at this entry for packet classification
based on ESP header and SPI value with match action as UCAST_IPSEC.
Also, these packets need to be directed to the dedicated receive
queue, so provide the RQ index as part of the NPC_INSTALL_FLOW mailbox.
Add a function to delete the NPC rule as well.
SPI-to-SA match table
---------------------
NIX RX maintains a common hash table for matching the SPI value from
the ESP packet to the SA index associated with it. This table has 2K
entries with 4 ways. When a packet is received with the action set to
UCAST_IPSEC, NIXRX uses the SPI from the packet header to perform a
lookup in the SPI-to-SA hash table. This lookup, if successful, returns
an SA index that NIXRX uses to calculate the exact SA context address
and program it in the CPT_INST_S before submitting the packet to CPT
for decryption.
Add functions to install and delete an entry from this table via the
NIX_SPI_TO_SA_ADD and NIX_SPI_TO_SA_DELETE mailbox calls respectively.
When the RQs are changed at runtime via ethtool, RVU PF driver frees all
the resources and goes through reinitialization with the new set of receive
queues. As part of this flow, the UCAST_IPSEC NPC rules that were installed
by the RVU PF/VF driver have to be reconfigured with the new RQ index.
So, delete the NPC rules when the interface is stopped via otx2_stop().
When otx2_open() is called, re-install the NPC flow and re-initialize the
SPI-to-SA table for every SA context that was previously installed.
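In summary, installing classification for one inbound SA is a
three-step sequence (a condensed sketch of the functions added in this
patch; error handling trimmed):

	err = cn10k_inb_alloc_mcam_entry(pfvf, ctx);	/* reserve an MCAM slot */
	err = cn10k_inb_install_flow(pfvf, x, ctx);	/* ESP+SPI -> UCAST_IPSEC RQ */
	err = cn10k_inb_install_spi_to_sa_match_entry(pfvf, x, ctx);
							/* SPI -> SA index entry */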
Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
---
.../marvell/octeontx2/nic/cn10k_ipsec.c | 201 ++++++++++++++++++
.../marvell/octeontx2/nic/cn10k_ipsec.h | 7 +
.../ethernet/marvell/octeontx2/nic/otx2_pf.c | 9 +
3 files changed, 217 insertions(+)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
index c6f408007511..91c8f13b6e48 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
@@ -346,6 +346,194 @@ static int cn10k_outb_cpt_init(struct net_device *netdev)
return ret;
}
+static int cn10k_inb_alloc_mcam_entry(struct otx2_nic *pfvf,
+ struct cn10k_inb_sw_ctx_info *inb_ctx_info)
+{
+ struct otx2_flow_config *flow_cfg = pfvf->flow_cfg;
+ struct npc_mcam_alloc_entry_req *mcam_req;
+ struct npc_mcam_alloc_entry_rsp *mcam_rsp;
+ int err = 0;
+
+ if (!pfvf->flow_cfg || !flow_cfg->flow_ent)
+ return -ENODEV;
+
+ mutex_lock(&pfvf->mbox.lock);
+
+ /* Request an MCAM entry to install UCAST_IPSEC rule */
+ mcam_req = otx2_mbox_alloc_msg_npc_mcam_alloc_entry(&pfvf->mbox);
+ if (!mcam_req) {
+ err = -ENOMEM;
+ goto out;
+ }
+
+ mcam_req->contig = false;
+ mcam_req->count = 1;
+ mcam_req->ref_entry = flow_cfg->flow_ent[0];
+ mcam_req->priority = NPC_MCAM_HIGHER_PRIO;
+
+ if (otx2_sync_mbox_msg(&pfvf->mbox)) {
+ err = -ENODEV;
+ goto out;
+ }
+
+ mcam_rsp = (struct npc_mcam_alloc_entry_rsp *)otx2_mbox_get_rsp(&pfvf->mbox.mbox,
+ 0, &mcam_req->hdr);
+
+ /* Store NPC MCAM entry for bookkeeping */
+ inb_ctx_info->npc_mcam_entry = mcam_rsp->entry_list[0];
+
+out:
+ mutex_unlock(&pfvf->mbox.lock);
+ return err;
+}
+
+static int cn10k_inb_install_flow(struct otx2_nic *pfvf, struct xfrm_state *x,
+ struct cn10k_inb_sw_ctx_info *inb_ctx_info)
+{
+ struct npc_install_flow_req *req;
+ int err;
+
+ mutex_lock(&pfvf->mbox.lock);
+
+ req = otx2_mbox_alloc_msg_npc_install_flow(&pfvf->mbox);
+ if (!req) {
+ err = -ENOMEM;
+ goto out;
+ }
+
+ req->entry = inb_ctx_info->npc_mcam_entry;
+ req->features |= BIT(NPC_IPPROTO_ESP) | BIT(NPC_IPSEC_SPI) | BIT(NPC_DMAC);
+ req->intf = NIX_INTF_RX;
+ req->index = pfvf->ipsec.inb_ipsec_rq;
+ req->match_id = 0xfeed;
+ req->channel = pfvf->hw.rx_chan_base;
+ req->op = NIX_RX_ACTIONOP_UCAST_IPSEC;
+ req->set_cntr = 1;
+ req->packet.spi = x->id.spi;
+ req->mask.spi = 0xffffffff;
+
+ /* Send message to AF */
+ err = otx2_sync_mbox_msg(&pfvf->mbox);
+out:
+ mutex_unlock(&pfvf->mbox.lock);
+ return err;
+}
+
+static int cn10k_inb_delete_flow(struct otx2_nic *pfvf,
+ struct cn10k_inb_sw_ctx_info *inb_ctx_info)
+{
+ struct npc_delete_flow_req *req;
+ int err = 0;
+
+ mutex_lock(&pfvf->mbox.lock);
+
+ req = otx2_mbox_alloc_msg_npc_delete_flow(&pfvf->mbox);
+ if (!req) {
+ err = -ENOMEM;
+ goto out;
+ }
+
+ req->entry = inb_ctx_info->npc_mcam_entry;
+
+ /* Send message to AF */
+ err = otx2_sync_mbox_msg(&pfvf->mbox);
+out:
+ mutex_unlock(&pfvf->mbox.lock);
+ return err;
+}
+
+static int cn10k_inb_ena_dis_flow(struct otx2_nic *pfvf,
+ struct cn10k_inb_sw_ctx_info *inb_ctx_info,
+ bool disable)
+{
+ struct npc_mcam_ena_dis_entry_req *req;
+ int err = 0;
+
+ mutex_lock(&pfvf->mbox.lock);
+
+ if (disable)
+ req = otx2_mbox_alloc_msg_npc_mcam_dis_entry(&pfvf->mbox);
+ else
+ req = otx2_mbox_alloc_msg_npc_mcam_ena_entry(&pfvf->mbox);
+ if (!req) {
+ err = -ENOMEM;
+ goto out;
+ }
+
+ req->entry = inb_ctx_info->npc_mcam_entry;
+
+ err = otx2_sync_mbox_msg(&pfvf->mbox);
+out:
+ mutex_unlock(&pfvf->mbox.lock);
+ return err;
+}
+
+void cn10k_ipsec_inb_disable_flows(struct otx2_nic *pfvf)
+{
+ struct cn10k_inb_sw_ctx_info *inb_ctx_info;
+
+ list_for_each_entry(inb_ctx_info, &pfvf->ipsec.inb_sw_ctx_list, list) {
+ if (cn10k_inb_ena_dis_flow(pfvf, inb_ctx_info, true)) {
+ netdev_err(pfvf->netdev, "Failed to disable UCAST_IPSEC"
+ " entry %d\n", inb_ctx_info->npc_mcam_entry);
+ continue;
+ }
+ inb_ctx_info->delete_npc_and_match_entry = false;
+ }
+}
+
+static int cn10k_inb_install_spi_to_sa_match_entry(struct otx2_nic *pfvf,
+ struct xfrm_state *x,
+ struct cn10k_inb_sw_ctx_info *inb_ctx_info)
+{
+ struct nix_spi_to_sa_add_req *req;
+ struct nix_spi_to_sa_add_rsp *rsp;
+ int err;
+
+ mutex_lock(&pfvf->mbox.lock);
+ req = otx2_mbox_alloc_msg_nix_spi_to_sa_add(&pfvf->mbox);
+ if (!req) {
+ mutex_unlock(&pfvf->mbox.lock);
+ return -ENOMEM;
+ }
+
+ req->sa_index = inb_ctx_info->sa_index;
+ req->spi_index = be32_to_cpu(x->id.spi);
+ req->match_id = 0xfeed;
+ req->valid = 1;
+
+ /* Send message to AF */
+ err = otx2_sync_mbox_msg(&pfvf->mbox);
+
+ rsp = (struct nix_spi_to_sa_add_rsp *)otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr);
+ inb_ctx_info->hash_index = rsp->hash_index;
+ inb_ctx_info->way = rsp->way;
+
+ mutex_unlock(&pfvf->mbox.lock);
+ return err;
+}
+
+static int cn10k_inb_delete_spi_to_sa_match_entry(struct otx2_nic *pfvf,
+ struct cn10k_inb_sw_ctx_info *inb_ctx_info)
+{
+ struct nix_spi_to_sa_delete_req *req;
+ int err;
+
+ mutex_lock(&pfvf->mbox.lock);
+ req = otx2_mbox_alloc_msg_nix_spi_to_sa_delete(&pfvf->mbox);
+ if (!req) {
+ mutex_unlock(&pfvf->mbox.lock);
+ return -ENOMEM;
+ }
+
+ req->hash_index = inb_ctx_info->hash_index;
+ req->way = inb_ctx_info->way;
+
+ err = otx2_sync_mbox_msg(&pfvf->mbox);
+ mutex_unlock(&pfvf->mbox.lock);
+ return err;
+}
+
static int cn10k_inb_nix_inline_lf_cfg(struct otx2_nic *pfvf)
{
struct nix_inline_ipsec_lf_cfg *req;
@@ -677,6 +865,7 @@ static irqreturn_t cn10k_ipsec_npa_inb_ipsecq_intr_handler(int irq, void *data)
static int cn10k_inb_cpt_init(struct net_device *netdev)
{
struct otx2_nic *pfvf = netdev_priv(netdev);
+ struct cn10k_inb_sw_ctx_info *inb_ctx_info;
int ret = 0, vec;
char *irq_name;
void *ptr;
@@ -737,6 +926,18 @@ static int cn10k_inb_cpt_init(struct net_device *netdev)
else
ret = 0;
+ /* If the driver has any offloaded inbound SA context(s), re-install the
+ * associated SPI-to-SA match and NPC rules. This is generally executed
+ * when the RQs are changed at runtime.
+ */
+ list_for_each_entry(inb_ctx_info, &pfvf->ipsec.inb_sw_ctx_list, list) {
+ cn10k_inb_ena_dis_flow(pfvf, inb_ctx_info, false);
+ cn10k_inb_install_flow(pfvf, inb_ctx_info->x_state, inb_ctx_info);
+ cn10k_inb_install_spi_to_sa_match_entry(pfvf,
+ inb_ctx_info->x_state,
+ inb_ctx_info);
+ }
+
out:
return ret;
}
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
index f042cbadf054..aad5ebea64ef 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
@@ -329,6 +329,7 @@ bool otx2_sqe_add_sg_ipsec(struct otx2_nic *pfvf, struct otx2_snd_queue *sq,
bool cn10k_ipsec_transmit(struct otx2_nic *pf, struct netdev_queue *txq,
struct otx2_snd_queue *sq, struct sk_buff *skb,
int num_segs, int size);
+void cn10k_ipsec_inb_disable_flows(struct otx2_nic *pf);
#else
static inline __maybe_unused int cn10k_ipsec_init(struct net_device *netdev)
{
@@ -359,5 +360,11 @@ cn10k_ipsec_transmit(struct otx2_nic *pf, struct netdev_queue *txq,
{
return true;
}
+
+static inline void __maybe_unused
+cn10k_ipsec_inb_disable_flows(struct otx2_nic *pf)
+{
+}
+
#endif
#endif // CN10K_IPSEC_H
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
index 0ffc56efcc23..7fcd382cb410 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
@@ -1714,7 +1714,12 @@ void otx2_free_hw_resources(struct otx2_nic *pf)
if (!otx2_rep_dev(pf->pdev))
cn10k_free_all_ipolicers(pf);
+ /* Delete inbound IPsec flows if any SAs are installed */
+ if (!list_empty(&pf->ipsec.inb_sw_ctx_list))
+ cn10k_ipsec_inb_disable_flows(pf);
+
mutex_lock(&mbox->lock);
+
/* Reset NIX LF */
free_req = otx2_mbox_alloc_msg_nix_lf_free(mbox);
if (free_req) {
@@ -2045,6 +2050,10 @@ int otx2_open(struct net_device *netdev)
otx2_do_set_rx_mode(pf);
+ /* Re-initialize IPsec flows if any previously installed */
+ if (!list_empty(&pf->ipsec.inb_sw_ctx_list))
+ cn10k_ipsec_ethtool_init(netdev, true);
+
return 0;
err_disable_rxtx:
--
2.43.0
^ permalink raw reply related [flat|nested] 43+ messages in thread
* [net-next PATCH v1 14/15] octeontx2-pf: ipsec: Process CPT metapackets
2025-05-02 13:19 [net-next PATCH v1 00/15] Enable Inbound IPsec offload on Marvell CN10K SoC Tanmay Jagdale
` (12 preceding siblings ...)
2025-05-02 13:19 ` [net-next PATCH v1 13/15] octeontx2-pf: ipsec: Manage NPC rules and SPI-to-SA table entries Tanmay Jagdale
@ 2025-05-02 13:19 ` Tanmay Jagdale
2025-05-07 16:30 ` Simon Horman
2025-05-02 13:19 ` [net-next PATCH v1 15/15] octeontx2-pf: ipsec: Add XFRM state and policy hooks for inbound flows Tanmay Jagdale
2025-05-05 17:52 ` [net-next PATCH v1 00/15] Enable Inbound IPsec offload on Marvell CN10K SoC Leon Romanovsky
15 siblings, 1 reply; 43+ messages in thread
From: Tanmay Jagdale @ 2025-05-02 13:19 UTC (permalink / raw)
To: bbrezillon, arno, schalla, herbert, davem, sgoutham, lcherian,
gakula, jerinj, hkelam, sbhatta, andrew+netdev, edumazet, kuba,
pabeni, bbhushan2, bhelgaas, pstanner, gregkh, peterz, linux,
krzysztof.kozlowski, giovanni.cabiddu
Cc: linux-crypto, linux-kernel, netdev, rkannoth, sumang, gcherian,
Tanmay Jagdale
CPT hardware forwards decrypted IPsec packets to NIX via the X2P bus
as metapackets, which are 256 bytes in length. Each metapacket
contains CPT_PARSE_HDR_S and the initial bytes of the decrypted
packet, which help NIX RX in classifying it and submitting it to the
CPU. Additionally, CPT sets BIT(11) of the channel number to indicate
that it's a 2nd-pass packet from CPT.
Since the metapackets are not complete packets, they don't have to go
through L3/L4 length and checksum verification, so these checks are
disabled via the NIX_LF_INLINE_RQ_CFG mailbox during IPsec
initialization.
The CPT_PARSE_HDR_S contains a WQE pointer to the complete decrypted
packet. Add code in the Rx NAPI handler to parse the header and
extract the WQE pointer. Use this WQE pointer to construct the skb and
set the XFRM packet mode flags to indicate successful decryption
before submitting it to the network stack.
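The second-pass check itself is a single bit test; a condensed sketch
of the condition the Rx handler uses in this patch:

	/* CPT sets BIT(11) of the channel number on 2nd-pass packets */
	if (parse->chan & BIT(11))
		orig_pkt_wqe = cn10k_ipsec_process_cpt_metapkt(pfvf, sg,
							       skb, qidx);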
Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
---
.../marvell/octeontx2/nic/cn10k_ipsec.c | 61 +++++++++++++++++++
.../marvell/octeontx2/nic/cn10k_ipsec.h | 47 ++++++++++++++
.../marvell/octeontx2/nic/otx2_struct.h | 16 +++++
.../marvell/octeontx2/nic/otx2_txrx.c | 25 +++++++-
4 files changed, 147 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
index 91c8f13b6e48..bebf5cdedee4 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
@@ -346,6 +346,67 @@ static int cn10k_outb_cpt_init(struct net_device *netdev)
return ret;
}
+struct nix_wqe_rx_s *cn10k_ipsec_process_cpt_metapkt(struct otx2_nic *pfvf,
+ struct nix_rx_sg_s *sg,
+ struct sk_buff *skb,
+ int qidx)
+{
+ struct nix_wqe_rx_s *wqe = NULL;
+ u64 *seg_addr = &sg->seg_addr;
+ struct cpt_parse_hdr_s *cptp;
+ struct xfrm_offload *xo;
+ struct otx2_pool *pool;
+ struct xfrm_state *xs;
+ struct sec_path *sp;
+ u64 *va_ptr;
+ void *va;
+ int i;
+
+ /* CPT_PARSE_HDR_S is present in the beginning of the buffer */
+ va = phys_to_virt(otx2_iova_to_phys(pfvf->iommu_domain, *seg_addr));
+
+ /* Convert CPT_PARSE_HDR_S from BE to LE */
+ va_ptr = (u64 *)va;
+ for (i = 0; i < (sizeof(struct cpt_parse_hdr_s) / sizeof(u64)); i++)
+ va_ptr[i] = be64_to_cpu(va_ptr[i]);
+
+ cptp = (struct cpt_parse_hdr_s *)va;
+
+ /* Convert the wqe_ptr from CPT_PARSE_HDR_S to a CPU usable pointer */
+ wqe = (struct nix_wqe_rx_s *)phys_to_virt(otx2_iova_to_phys(pfvf->iommu_domain,
+ cptp->wqe_ptr));
+
+ /* Get the XFRM state pointer stored in SA context */
+ va_ptr = pfvf->ipsec.inb_sa->base +
+ (cptp->cookie * pfvf->ipsec.sa_tbl_entry_sz) + 1024;
+ xs = (struct xfrm_state *)*va_ptr;
+
+ /* Set XFRM offload status and flags for successful decryption */
+ sp = secpath_set(skb);
+ if (!sp) {
+ netdev_err(pfvf->netdev, "Failed to secpath_set\n");
+ wqe = NULL;
+ goto err_out;
+ }
+
+ rcu_read_lock();
+ xfrm_state_hold(xs);
+ rcu_read_unlock();
+
+ sp->xvec[sp->len++] = xs;
+ sp->olen++;
+
+ xo = xfrm_offload(skb);
+ xo->flags = CRYPTO_DONE;
+ xo->status = CRYPTO_SUCCESS;
+
+err_out:
+ /* Free the metapacket memory here since it's not needed anymore */
+ pool = &pfvf->qset.pool[qidx];
+ otx2_free_bufs(pfvf, pool, *seg_addr - OTX2_HEAD_ROOM, pfvf->rbsize);
+ return wqe;
+}
+
static int cn10k_inb_alloc_mcam_entry(struct otx2_nic *pfvf,
struct cn10k_inb_sw_ctx_info *inb_ctx_info)
{
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
index aad5ebea64ef..68046e377486 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
@@ -8,6 +8,7 @@
#define CN10K_IPSEC_H
#include <linux/types.h>
+#include "otx2_struct.h"
DECLARE_STATIC_KEY_FALSE(cn10k_ipsec_sa_enabled);
@@ -302,6 +303,41 @@ struct cpt_sg_s {
u64 rsvd_63_50 : 14;
};
+/* CPT Parse Header Structure for Inbound packets */
+struct cpt_parse_hdr_s {
+ /* Word 0 */
+ u64 cookie : 32;
+ u64 match_id : 16;
+ u64 err_sum : 1;
+ u64 reas_sts : 4;
+ u64 reserved_53 : 1;
+ u64 et_owr : 1;
+ u64 pkt_fmt : 1;
+ u64 pad_len : 3;
+ u64 num_frags : 3;
+ u64 pkt_out : 2;
+
+ /* Word 1 */
+ u64 wqe_ptr;
+
+ /* Word 2 */
+ u64 frag_age : 16;
+ u64 res_32_16 : 16;
+ u64 pf_func : 16;
+ u64 il3_off : 8;
+ u64 fi_pad : 3;
+ u64 fi_offset : 5;
+
+ /* Word 3 */
+ u64 hw_ccode : 8;
+ u64 uc_ccode : 8;
+ u64 res3_32_16 : 16;
+ u64 spi : 32;
+
+ /* Word 4 */
+ u64 misc;
+};
+
/* CPT LF_INPROG Register */
#define CPT_LF_INPROG_INFLIGHT GENMASK_ULL(8, 0)
#define CPT_LF_INPROG_GRB_CNT GENMASK_ULL(39, 32)
@@ -330,6 +366,10 @@ bool cn10k_ipsec_transmit(struct otx2_nic *pf, struct netdev_queue *txq,
struct otx2_snd_queue *sq, struct sk_buff *skb,
int num_segs, int size);
void cn10k_ipsec_inb_disable_flows(struct otx2_nic *pf);
+struct nix_wqe_rx_s *cn10k_ipsec_process_cpt_metapkt(struct otx2_nic *pfvf,
+ struct nix_rx_sg_s *sg,
+ struct sk_buff *skb,
+ int qidx);
#else
static inline __maybe_unused int cn10k_ipsec_init(struct net_device *netdev)
{
@@ -366,5 +406,12 @@ cn10k_ipsec_inb_disable_flows(struct otx2_nic *pf)
{
}
+static inline struct nix_wqe_rx_s *
+cn10k_ipsec_process_cpt_metapkt(struct otx2_nic *pfvf,
+                                struct nix_rx_sg_s *sg,
+                                struct sk_buff *skb, int qidx)
+{
+ return NULL;
+}
#endif
#endif // CN10K_IPSEC_H
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_struct.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_struct.h
index 4e5899d8fa2e..506fab414b7e 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_struct.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_struct.h
@@ -175,6 +175,22 @@ struct nix_cqe_tx_s {
struct nix_send_comp_s comp;
};
+/* NIX WQE header structure */
+struct nix_wqe_hdr_s {
+ u64 flow_tag : 32;
+ u64 tt : 2;
+ u64 reserved_34_43 : 10;
+ u64 node : 2;
+ u64 q : 14;
+ u64 wqe_type : 4;
+};
+
+struct nix_wqe_rx_s {
+ struct nix_wqe_hdr_s hdr;
+ struct nix_rx_parse_s parse;
+ struct nix_rx_sg_s sg;
+};
+
/* NIX SQE header structure */
struct nix_sqe_hdr_s {
u64 total : 18; /* W0 */
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
index 9593627b35a3..e9d0e27ffd0b 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
@@ -205,6 +205,9 @@ static bool otx2_skb_add_frag(struct otx2_nic *pfvf, struct sk_buff *skb,
}
}
+ if (parse->chan & 0x800)
+ off = 0;
+
page = virt_to_page(va);
if (likely(skb_shinfo(skb)->nr_frags < MAX_SKB_FRAGS)) {
skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,
@@ -333,6 +336,7 @@ static void otx2_rcv_pkt_handler(struct otx2_nic *pfvf,
struct nix_cqe_rx_s *cqe, bool *need_xdp_flush)
{
struct nix_rx_parse_s *parse = &cqe->parse;
+ struct nix_wqe_rx_s *orig_pkt_wqe = NULL;
struct nix_rx_sg_s *sg = &cqe->sg;
struct sk_buff *skb = NULL;
void *end, *start;
@@ -355,8 +359,25 @@ static void otx2_rcv_pkt_handler(struct otx2_nic *pfvf,
if (unlikely(!skb))
return;
- start = (void *)sg;
- end = start + ((cqe->parse.desc_sizem1 + 1) * 16);
+ if (parse->chan & 0x800) {
+ orig_pkt_wqe = cn10k_ipsec_process_cpt_metapkt(pfvf, sg, skb, cq->cq_idx);
+ if (!orig_pkt_wqe) {
+ netdev_err(pfvf->netdev, "Invalid WQE in CPT metapacket\n");
+ napi_free_frags(napi);
+ cq->pool_ptrs++;
+ return;
+ }
+ /* Switch *sg to the orig_pkt_wqe's *sg which has the actual
+ * complete decrypted packet by CPT.
+ */
+ sg = &orig_pkt_wqe->sg;
+ start = (void *)sg;
+ end = start + ((orig_pkt_wqe->parse.desc_sizem1 + 1) * 16);
+ } else {
+ start = (void *)sg;
+ end = start + ((cqe->parse.desc_sizem1 + 1) * 16);
+ }
+
while (start < end) {
sg = (struct nix_rx_sg_s *)start;
seg_addr = &sg->seg_addr;
--
2.43.0
^ permalink raw reply related [flat|nested] 43+ messages in thread
* [net-next PATCH v1 15/15] octeontx2-pf: ipsec: Add XFRM state and policy hooks for inbound flows
2025-05-02 13:19 [net-next PATCH v1 00/15] Enable Inbound IPsec offload on Marvell CN10K SoC Tanmay Jagdale
` (13 preceding siblings ...)
2025-05-02 13:19 ` [net-next PATCH v1 14/15] octeontx2-pf: ipsec: Process CPT metapackets Tanmay Jagdale
@ 2025-05-02 13:19 ` Tanmay Jagdale
2025-05-07 6:42 ` kernel test robot
2025-05-07 18:31 ` Simon Horman
2025-05-05 17:52 ` [net-next PATCH v1 00/15] Enable Inbound IPsec offload on Marvell CN10K SoC Leon Romanovsky
15 siblings, 2 replies; 43+ messages in thread
From: Tanmay Jagdale @ 2025-05-02 13:19 UTC (permalink / raw)
To: bbrezillon, arno, schalla, herbert, davem, sgoutham, lcherian,
gakula, jerinj, hkelam, sbhatta, andrew+netdev, edumazet, kuba,
pabeni, bbhushan2, bhelgaas, pstanner, gregkh, peterz, linux,
krzysztof.kozlowski, giovanni.cabiddu
Cc: linux-crypto, linux-kernel, netdev, rkannoth, sumang, gcherian,
Tanmay Jagdale
Add XFRM state hook for inbound flows and configure the following:
- Install an NPC rule to classify the 1st pass IPsec packets and
direct them to the dedicated RQ
- Allocate a free entry from the SA table and populate it with the
SA context details based on xfrm state data.
- Create a mapping of the SPI value to the SA table index. This is
used by NIXRX to calculate the exact SA context pointer address
based on the SPI in the packet.
- Prepare the CPT SA context to decrypt the buffer in place and
then write it to the CPT hardware via an LMT operation.
- When the XFRM state is deleted, clear this SA in CPT hardware.
Also add XFRM Policy hooks to allow successful offload of inbound
PACKET_MODE.
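SA table slots are tracked with a bitmap; a condensed sketch of the
allocation this patch adds as cn10k_inb_alloc_sa():

	sa_index = find_first_zero_bit(pf->ipsec.inb_sa_table,
				       CN10K_IPSEC_INB_MAX_SA);
	if (sa_index < CN10K_IPSEC_INB_MAX_SA)
		set_bit(sa_index, pf->ipsec.inb_sa_table);
	/* sa_index == CN10K_IPSEC_INB_MAX_SA means the table is full */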
Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
---
.../marvell/octeontx2/nic/cn10k_ipsec.c | 449 ++++++++++++++++--
1 file changed, 419 insertions(+), 30 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
index bebf5cdedee4..6441598c7e0f 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
@@ -448,7 +448,7 @@ static int cn10k_inb_alloc_mcam_entry(struct otx2_nic *pfvf,
return err;
}
-static int cn10k_inb_install_flow(struct otx2_nic *pfvf, struct xfrm_state *x,
+static int cn10k_inb_install_flow(struct otx2_nic *pfvf,
struct cn10k_inb_sw_ctx_info *inb_ctx_info)
{
struct npc_install_flow_req *req;
@@ -463,14 +463,14 @@ static int cn10k_inb_install_flow(struct otx2_nic *pfvf, struct xfrm_state *x,
}
req->entry = inb_ctx_info->npc_mcam_entry;
- req->features |= BIT(NPC_IPPROTO_ESP) | BIT(NPC_IPSEC_SPI) | BIT(NPC_DMAC);
+ req->features |= BIT(NPC_IPPROTO_ESP) | BIT(NPC_IPSEC_SPI);
req->intf = NIX_INTF_RX;
req->index = pfvf->ipsec.inb_ipsec_rq;
req->match_id = 0xfeed;
req->channel = pfvf->hw.rx_chan_base;
req->op = NIX_RX_ACTIONOP_UCAST_IPSEC;
req->set_cntr = 1;
- req->packet.spi = x->id.spi;
+ req->packet.spi = inb_ctx_info->spi;
req->mask.spi = 0xffffffff;
/* Send message to AF */
@@ -993,7 +993,7 @@ static int cn10k_inb_cpt_init(struct net_device *netdev)
*/
list_for_each_entry(inb_ctx_info, &pfvf->ipsec.inb_sw_ctx_list, list) {
cn10k_inb_ena_dis_flow(pfvf, inb_ctx_info, false);
- cn10k_inb_install_flow(pfvf, inb_ctx_info->x_state, inb_ctx_info);
+ cn10k_inb_install_flow(pfvf, inb_ctx_info);
cn10k_inb_install_spi_to_sa_match_entry(pfvf,
inb_ctx_info->x_state,
inb_ctx_info);
@@ -1035,6 +1035,19 @@ static int cn10k_outb_cpt_clean(struct otx2_nic *pf)
return ret;
}
+static u32 cn10k_inb_alloc_sa(struct otx2_nic *pf, struct xfrm_state *x)
+{
+ u32 sa_index = 0;
+
+ sa_index = find_first_zero_bit(pf->ipsec.inb_sa_table, CN10K_IPSEC_INB_MAX_SA);
+ if (sa_index >= CN10K_IPSEC_INB_MAX_SA)
+ return sa_index;
+
+ set_bit(sa_index, pf->ipsec.inb_sa_table);
+
+ return sa_index;
+}
+
static void cn10k_cpt_inst_flush(struct otx2_nic *pf, struct cpt_inst_s *inst,
u64 size)
{
@@ -1149,6 +1162,137 @@ static int cn10k_outb_write_sa(struct otx2_nic *pf, struct qmem *sa_info)
return ret;
}
+static int cn10k_inb_write_sa(struct otx2_nic *pf,
+ struct xfrm_state *x,
+ struct cn10k_inb_sw_ctx_info *inb_ctx_info)
+{
+ dma_addr_t res_iova, dptr_iova, sa_iova;
+ struct cn10k_rx_sa_s *sa_dptr, *sa_cptr;
+ struct cpt_inst_s inst;
+ u32 sa_size, off;
+ struct cpt_res_s *res;
+ u64 reg_val;
+ int ret;
+
+ res = dma_alloc_coherent(pf->dev, sizeof(struct cpt_res_s),
+ &res_iova, GFP_ATOMIC);
+ if (!res)
+ return -ENOMEM;
+
+ sa_cptr = inb_ctx_info->sa_entry;
+ sa_iova = inb_ctx_info->sa_iova;
+ sa_size = sizeof(struct cn10k_rx_sa_s);
+
+ sa_dptr = dma_alloc_coherent(pf->dev, sa_size, &dptr_iova, GFP_ATOMIC);
+ if (!sa_dptr) {
+ dma_free_coherent(pf->dev, sizeof(struct cpt_res_s), res,
+ res_iova);
+ return -ENOMEM;
+ }
+
+ for (off = 0; off < (sa_size / 8); off++)
+ *((u64 *)sa_dptr + off) = cpu_to_be64(*((u64 *)sa_cptr + off));
+
+ memset(&inst, 0, sizeof(struct cpt_inst_s));
+
+ res->compcode = 0;
+ inst.res_addr = res_iova;
+ inst.dptr = (u64)dptr_iova;
+ inst.param2 = sa_size >> 3;
+ inst.dlen = sa_size;
+ inst.opcode_major = CN10K_IPSEC_MAJOR_OP_WRITE_SA;
+ inst.opcode_minor = CN10K_IPSEC_MINOR_OP_WRITE_SA;
+ inst.cptr = sa_iova;
+ inst.ctx_val = 1;
+ inst.egrp = CN10K_DEF_CPT_IPSEC_EGRP;
+
+ /* Re-use Outbound CPT LF to install Ingress SAs as well because
+ * the driver does not own the ingress CPT LF.
+ */
+ pf->ipsec.io_addr = (__force u64)otx2_get_regaddr(pf, CN10K_CPT_LF_NQX(0));
+ cn10k_cpt_inst_flush(pf, &inst, sizeof(struct cpt_inst_s));
+ dmb(sy);
+
+ ret = cn10k_wait_for_cpt_respose(pf, res);
+ if (ret)
+ goto out;
+
+ /* Trigger CTX flush to write dirty data back to DRAM */
+ reg_val = FIELD_PREP(GENMASK_ULL(45, 0), sa_iova >> 7);
+ otx2_write64(pf, CN10K_CPT_LF_CTX_FLUSH, reg_val);
+
+out:
+ dma_free_coherent(pf->dev, sa_size, sa_dptr, dptr_iova);
+ dma_free_coherent(pf->dev, sizeof(struct cpt_res_s), res, res_iova);
+ return ret;
+}
+
+static void cn10k_xfrm_inb_prepare_sa(struct otx2_nic *pf, struct xfrm_state *x,
+ struct cn10k_inb_sw_ctx_info *inb_ctx_info)
+{
+ struct cn10k_rx_sa_s *sa_entry = inb_ctx_info->sa_entry;
+ int key_len = (x->aead->alg_key_len + 7) / 8;
+ u8 *key = x->aead->alg_key;
+ u32 sa_size = sizeof(struct cn10k_rx_sa_s);
+ u64 *tmp_key;
+ u32 *tmp_salt;
+ int idx;
+
+ memset(sa_entry, 0, sizeof(struct cn10k_rx_sa_s));
+
+ /* Disable ESN for now */
+ sa_entry->esn_en = 0;
+
+ /* HW context offset is word-31 */
+ sa_entry->hw_ctx_off = 31;
+ sa_entry->pkind = NPC_RX_CPT_HDR_PKIND;
+ sa_entry->eth_ovrwr = 1;
+ sa_entry->pkt_output = 1;
+ sa_entry->pkt_format = 1;
+ sa_entry->orig_pkt_free = 0;
+ /* context push size is up to word 31 */
+ sa_entry->ctx_push_size = 31 + 1;
+ /* context size, 128 Byte aligned up */
+ sa_entry->ctx_size = (sa_size / OTX2_ALIGN) & 0xF;
+
+ sa_entry->cookie = inb_ctx_info->sa_index;
+
+ /* 1 word prepended to context header size */
+ sa_entry->ctx_hdr_size = 1;
+ /* Mark SA entry valid */
+ sa_entry->aop_valid = 1;
+
+ sa_entry->sa_dir = 0; /* Inbound */
+ sa_entry->ipsec_protocol = 1; /* ESP */
+ /* Default to Transport Mode */
+ if (x->props.mode == XFRM_MODE_TUNNEL)
+ sa_entry->ipsec_mode = 1; /* Tunnel Mode */
+
+ sa_entry->et_ovrwr_ddr_en = 1;
+ sa_entry->enc_type = 5; /* AES-GCM only */
+ sa_entry->aes_key_len = 1; /* AES key length 128 */
+ sa_entry->l2_l3_hdr_on_error = 1;
+ sa_entry->spi = cpu_to_be32(x->id.spi);
+
+ /* Last 4 bytes are salt */
+ key_len -= 4;
+ memcpy(sa_entry->cipher_key, key, key_len);
+ tmp_key = (u64 *)sa_entry->cipher_key;
+
+ for (idx = 0; idx < key_len / 8; idx++)
+ tmp_key[idx] = be64_to_cpu(tmp_key[idx]);
+
+ memcpy(&sa_entry->iv_gcm_salt, key + key_len, 4);
+ tmp_salt = (u32 *)&sa_entry->iv_gcm_salt;
+ *tmp_salt = be32_to_cpu(*tmp_salt);
+
+ /* Write SA context data to memory before enabling */
+ wmb();
+
+ /* Enable SA */
+ sa_entry->sa_valid = 1;
+}
+
static int cn10k_ipsec_get_hw_ctx_offset(void)
{
/* Offset on Hardware-context offset in word */
@@ -1256,11 +1400,6 @@ static int cn10k_ipsec_validate_state(struct xfrm_state *x,
"Only IPv4/v6 xfrm states may be offloaded");
return -EINVAL;
}
- if (x->xso.type != XFRM_DEV_OFFLOAD_CRYPTO) {
- NL_SET_ERR_MSG_MOD(extack,
- "Cannot offload other than crypto-mode");
- return -EINVAL;
- }
if (x->props.mode != XFRM_MODE_TRANSPORT &&
x->props.mode != XFRM_MODE_TUNNEL) {
NL_SET_ERR_MSG_MOD(extack,
@@ -1272,11 +1411,6 @@ static int cn10k_ipsec_validate_state(struct xfrm_state *x,
"Only ESP xfrm state may be offloaded");
return -EINVAL;
}
- if (x->encap) {
- NL_SET_ERR_MSG_MOD(extack,
- "Encapsulated xfrm state may not be offloaded");
- return -EINVAL;
- }
if (!x->aead) {
NL_SET_ERR_MSG_MOD(extack,
"Cannot offload xfrm states without aead");
@@ -1316,8 +1450,96 @@ static int cn10k_ipsec_validate_state(struct xfrm_state *x,
static int cn10k_ipsec_inb_add_state(struct xfrm_state *x,
struct netlink_ext_ack *extack)
{
- NL_SET_ERR_MSG_MOD(extack, "xfrm inbound offload not supported");
- return -EOPNOTSUPP;
+ struct net_device *netdev = x->xso.dev;
+ struct cn10k_inb_sw_ctx_info *inb_ctx_info = NULL, *inb_ctx;
+ bool enable_rule = false;
+ struct otx2_nic *pf;
+ u64 *sa_offset_ptr;
+ u32 sa_index = 0;
+ int err = 0;
+
+ pf = netdev_priv(netdev);
+
+ /* If XFRM policy was added before state, then the inb_ctx_info instance
+ * would be allocated there.
+ */
+ list_for_each_entry(inb_ctx, &pf->ipsec.inb_sw_ctx_list, list) {
+ if (inb_ctx->spi == x->id.spi) {
+ inb_ctx_info = inb_ctx;
+ enable_rule = true;
+ break;
+ }
+ }
+
+ if (!inb_ctx_info) {
+ /* Allocate a structure to track SA related info in driver */
+ inb_ctx_info = devm_kzalloc(pf->dev, sizeof(*inb_ctx_info), GFP_KERNEL);
+ if (!inb_ctx_info)
+ return -ENOMEM;
+
+	}
+	/* Stash pointer in the xfrm offload handle, for both paths above */
+	x->xso.offload_handle = (unsigned long)inb_ctx_info;
+
+ sa_index = cn10k_inb_alloc_sa(pf, x);
+ if (sa_index >= CN10K_IPSEC_INB_MAX_SA) {
+ netdev_err(netdev, "Failed to find free entry in SA Table\n");
+ err = -ENOMEM;
+ goto err_out;
+ }
+
+ /* Fill in information for bookkeeping */
+ inb_ctx_info->sa_index = sa_index;
+ inb_ctx_info->spi = x->id.spi;
+ inb_ctx_info->sa_entry = pf->ipsec.inb_sa->base +
+ (sa_index * pf->ipsec.sa_tbl_entry_sz);
+ inb_ctx_info->sa_iova = pf->ipsec.inb_sa->iova +
+ (sa_index * pf->ipsec.sa_tbl_entry_sz);
+ inb_ctx_info->x_state = x;
+
+ /* Store XFRM state pointer in SA context at an offset of 1KB.
+ * It will be later used in the rcv_pkt_handler to associate
+ * an skb with XFRM state.
+ */
+ sa_offset_ptr = pf->ipsec.inb_sa->base +
+ (sa_index * pf->ipsec.sa_tbl_entry_sz) + 1024;
+ *sa_offset_ptr = (u64)x;
+
+ err = cn10k_inb_install_spi_to_sa_match_entry(pf, x, inb_ctx_info);
+ if (err) {
+ netdev_err(netdev, "Failed to install Inbound IPSec exact match entry\n");
+ goto err_out;
+ }
+
+ cn10k_xfrm_inb_prepare_sa(pf, x, inb_ctx_info);
+
+ netdev_dbg(netdev, "inb_ctx_info: sa_index:%d spi:0x%x mcam_entry:%d"
+ " hash_index:0x%x way:0x%x\n",
+ inb_ctx_info->sa_index, inb_ctx_info->spi,
+ inb_ctx_info->npc_mcam_entry, inb_ctx_info->hash_index,
+ inb_ctx_info->way);
+
+ err = cn10k_inb_write_sa(pf, x, inb_ctx_info);
+ if (err)
+ netdev_err(netdev, "Error writing inbound SA\n");
+
+ /* Enable NPC rule if policy was already installed */
+ if (enable_rule) {
+ err = cn10k_inb_ena_dis_flow(pf, inb_ctx_info, false);
+ if (err)
+ netdev_err(netdev, "Failed to enable rule\n");
+ } else {
+ /* All set, add ctx_info to the list */
+ list_add_tail(&inb_ctx_info->list, &pf->ipsec.inb_sw_ctx_list);
+ }
+
+ cn10k_cpt_device_set_available(pf);
+ return err;
+
+err_out:
+	x->xso.offload_handle = 0;
+	/* Only free a context that is not already on the inb_sw_ctx_list */
+	if (!enable_rule)
+		devm_kfree(pf->dev, inb_ctx_info);
+	return err;
}
static int cn10k_ipsec_outb_add_state(struct xfrm_state *x,
@@ -1329,10 +1551,6 @@ static int cn10k_ipsec_outb_add_state(struct xfrm_state *x,
struct otx2_nic *pf;
int err;
- err = cn10k_ipsec_validate_state(x, extack);
- if (err)
- return err;
-
pf = netdev_priv(netdev);
err = qmem_alloc(pf->dev, &sa_info, pf->ipsec.sa_size, OTX2_ALIGN);
@@ -1360,10 +1578,52 @@ static int cn10k_ipsec_outb_add_state(struct xfrm_state *x,
static int cn10k_ipsec_add_state(struct xfrm_state *x,
struct netlink_ext_ack *extack)
{
+ int err;
+
+ err = cn10k_ipsec_validate_state(x, extack);
+ if (err)
+ return err;
+
if (x->xso.dir == XFRM_DEV_OFFLOAD_IN)
return cn10k_ipsec_inb_add_state(x, extack);
else
return cn10k_ipsec_outb_add_state(x, extack);
+}
+
+static void cn10k_ipsec_inb_del_state(struct otx2_nic *pf, struct xfrm_state *x)
+{
+ struct cn10k_inb_sw_ctx_info *inb_ctx_info;
+ struct cn10k_rx_sa_s *sa_entry;
+ struct net_device *netdev = x->xso.dev;
+ int err = 0;
+
+ /* 1. Find SPI to SA entry */
+ inb_ctx_info = (struct cn10k_inb_sw_ctx_info *)x->xso.offload_handle;
+
+ if (inb_ctx_info->spi != be32_to_cpu(x->id.spi)) {
+ netdev_err(netdev, "SPI Mismatch (ctx) 0x%x != 0x%x (xfrm)\n",
+ inb_ctx_info->spi, be32_to_cpu(x->id.spi));
+ return;
+ }
+
+ /* 2. Delete SA in CPT HW */
+ sa_entry = inb_ctx_info->sa_entry;
+ memset(sa_entry, 0, sizeof(struct cn10k_rx_sa_s));
+
+ sa_entry->ctx_push_size = 31 + 1;
+ sa_entry->ctx_size = (sizeof(struct cn10k_rx_sa_s) / OTX2_ALIGN) & 0xF;
+ sa_entry->aop_valid = 1;
+
+ if (cn10k_cpt_device_set_inuse(pf)) {
+ err = cn10k_inb_write_sa(pf, x, inb_ctx_info);
+ if (err)
+ netdev_err(netdev, "Error (%d) deleting INB SA\n", err);
+ cn10k_cpt_device_set_available(pf);
+ }
+
+ x->xso.offload_handle = 0;
}
static void cn10k_ipsec_del_state(struct xfrm_state *x)
@@ -1374,11 +1634,11 @@ static void cn10k_ipsec_del_state(struct xfrm_state *x)
struct otx2_nic *pf;
int err;
- if (x->xso.dir == XFRM_DEV_OFFLOAD_IN)
- return;
-
pf = netdev_priv(netdev);
+ if (x->xso.dir == XFRM_DEV_OFFLOAD_IN)
+ return cn10k_ipsec_inb_del_state(pf, x);
+
sa_info = (struct qmem *)x->xso.offload_handle;
sa_entry = (struct cn10k_tx_sa_s *)sa_info->base;
memset(sa_entry, 0, sizeof(struct cn10k_tx_sa_s));
@@ -1397,13 +1657,112 @@ static void cn10k_ipsec_del_state(struct xfrm_state *x)
/* If no more SA's then update netdev feature for potential change
* in NETIF_F_HW_ESP.
*/
- if (!--pf->ipsec.outb_sa_count)
- queue_work(pf->ipsec.sa_workq, &pf->ipsec.sa_work);
+ pf->ipsec.outb_sa_count--;
+ queue_work(pf->ipsec.sa_workq, &pf->ipsec.sa_work);
+}
+
+static int cn10k_ipsec_policy_add(struct xfrm_policy *x,
+ struct netlink_ext_ack *extack)
+{
+ struct cn10k_inb_sw_ctx_info *inb_ctx_info = NULL, *inb_ctx;
+ struct net_device *netdev = x->xdo.dev;
+ struct otx2_nic *pf;
+ int ret = 0;
+ bool disable_rule = true;
+
+	if (x->xdo.dir != XFRM_DEV_OFFLOAD_IN) {
+		netdev_err(netdev, "ERR: Can only offload Inbound policies\n");
+		return -EINVAL;
+	}
+
+	if (x->xdo.type != XFRM_DEV_OFFLOAD_PACKET) {
+		netdev_err(netdev, "ERR: Only Packet mode supported\n");
+		return -EINVAL;
+	}
+
+ pf = netdev_priv(netdev);
+
+ /* If XFRM state was added before policy, then the inb_ctx_info instance
+ * would be allocated there.
+ */
+ list_for_each_entry(inb_ctx, &pf->ipsec.inb_sw_ctx_list, list) {
+ if (inb_ctx->spi == x->xfrm_vec[0].id.spi) {
+ inb_ctx_info = inb_ctx;
+ disable_rule = false;
+ break;
+ }
+ }
+
+ if (!inb_ctx_info) {
+ /* Allocate a structure to track SA related info in driver */
+ inb_ctx_info = devm_kzalloc(pf->dev, sizeof(*inb_ctx_info), GFP_KERNEL);
+ if (!inb_ctx_info)
+ return -ENOMEM;
+
+ inb_ctx_info->spi = x->xfrm_vec[0].id.spi;
+ }
+
+ ret = cn10k_inb_alloc_mcam_entry(pf, inb_ctx_info);
+ if (ret) {
+ netdev_err(netdev, "Failed to allocate MCAM entry for Inbound IPSec flow\n");
+ goto err_out;
+ }
+
+ ret = cn10k_inb_install_flow(pf, inb_ctx_info);
+ if (ret) {
+ netdev_err(netdev, "Failed to install Inbound IPSec flow\n");
+ goto err_out;
+ }
+
+ /* Leave rule in a disabled state until xfrm_state add is completed */
+ if (disable_rule) {
+ ret = cn10k_inb_ena_dis_flow(pf, inb_ctx_info, true);
+ if (ret)
+ netdev_err(netdev, "Failed to disable rule\n");
+
+ /* All set, add ctx_info to the list */
+ list_add_tail(&inb_ctx_info->list, &pf->ipsec.inb_sw_ctx_list);
+ }
+
+ /* Stash pointer in the xfrm offload handle */
+ x->xdo.offload_handle = (unsigned long)inb_ctx_info;
+
+err_out:
+ return ret;
+}
+
+static void cn10k_ipsec_policy_delete(struct xfrm_policy *x)
+{
+ struct cn10k_inb_sw_ctx_info *inb_ctx_info;
+ struct net_device *netdev = x->xdo.dev;
+ struct otx2_nic *pf;
+
+ if (!x->xdo.offload_handle)
+ return;
+
+ pf = netdev_priv(netdev);
+ inb_ctx_info = (struct cn10k_inb_sw_ctx_info *)x->xdo.offload_handle;
+
+ /* Schedule a workqueue to free NPC rule and SPI-to-SA match table
+ * entry because they are freed via a mailbox call which can sleep
+ * and the delete policy routine from XFRM stack is called in an
+ * atomic context.
+ */
+ inb_ctx_info->delete_npc_and_match_entry = true;
+ queue_work(pf->ipsec.sa_workq, &pf->ipsec.sa_work);
+}
+
+static void cn10k_ipsec_policy_free(struct xfrm_policy *x)
+{
+ return;
}
static const struct xfrmdev_ops cn10k_ipsec_xfrmdev_ops = {
.xdo_dev_state_add = cn10k_ipsec_add_state,
.xdo_dev_state_delete = cn10k_ipsec_del_state,
+ .xdo_dev_policy_add = cn10k_ipsec_policy_add,
+ .xdo_dev_policy_delete = cn10k_ipsec_policy_delete,
+ .xdo_dev_policy_free = cn10k_ipsec_policy_free,
};
static void cn10k_ipsec_sa_wq_handler(struct work_struct *work)
@@ -1411,12 +1770,42 @@ static void cn10k_ipsec_sa_wq_handler(struct work_struct *work)
struct cn10k_ipsec *ipsec = container_of(work, struct cn10k_ipsec,
sa_work);
struct otx2_nic *pf = container_of(ipsec, struct otx2_nic, ipsec);
+ struct cn10k_inb_sw_ctx_info *inb_ctx_info, *tmp;
+ int err;
+
+ list_for_each_entry_safe(inb_ctx_info, tmp, &pf->ipsec.inb_sw_ctx_list,
+ list) {
+ if (!inb_ctx_info->delete_npc_and_match_entry)
+ continue;
+
+		/* 3. Delete all the associated NPC rules */
+ err = cn10k_inb_delete_flow(pf, inb_ctx_info);
+ if (err) {
+ netdev_err(pf->netdev, "Failed to free UCAST_IPSEC entry %d\n",
+ inb_ctx_info->npc_mcam_entry);
+ }
+
+ /* 4. Remove SPI_TO_SA exact match entry */
+ err = cn10k_inb_delete_spi_to_sa_match_entry(pf, inb_ctx_info);
+ if (err)
+ netdev_err(pf->netdev, "Failed to delete spi_to_sa_match_entry\n");
+
+ inb_ctx_info->delete_npc_and_match_entry = false;
+
+ /* 5. Finally clear the entry from the SA Table */
+ clear_bit(inb_ctx_info->sa_index, pf->ipsec.inb_sa_table);
- /* Disable static branch when no more SA enabled */
- static_branch_disable(&cn10k_ipsec_sa_enabled);
- rtnl_lock();
- netdev_update_features(pf->netdev);
- rtnl_unlock();
+ /* 6. Free the inb_ctx_info */
+ list_del(&inb_ctx_info->list);
+ devm_kfree(pf->dev, inb_ctx_info);
+ }
+
+ if (list_empty(&pf->ipsec.inb_sw_ctx_list) && !pf->ipsec.outb_sa_count) {
+ static_branch_disable(&cn10k_ipsec_sa_enabled);
+ rtnl_lock();
+ netdev_update_features(pf->netdev);
+ rtnl_unlock();
+ }
}
static int cn10k_ipsec_configure_cpt_bpid(struct otx2_nic *pfvf)
--
2.43.0
* Re: [net-next PATCH v1 07/15] octeontx2-af: Add support for SPI to SA index translation
2025-05-02 13:19 ` [net-next PATCH v1 07/15] octeontx2-af: Add support for SPI to SA index translation Tanmay Jagdale
@ 2025-05-03 16:12 ` Kalesh Anakkur Purayil
2025-05-13 5:08 ` Tanmay Jagdale
2025-05-07 12:45 ` Simon Horman
1 sibling, 1 reply; 43+ messages in thread
From: Kalesh Anakkur Purayil @ 2025-05-03 16:12 UTC (permalink / raw)
To: Tanmay Jagdale
Cc: bbrezillon, arno, schalla, herbert, davem, sgoutham, lcherian,
gakula, jerinj, hkelam, sbhatta, andrew+netdev, edumazet, kuba,
pabeni, bbhushan2, bhelgaas, pstanner, gregkh, peterz, linux,
krzysztof.kozlowski, giovanni.cabiddu, linux-crypto, linux-kernel,
netdev, rkannoth, sumang, gcherian, Kiran Kumar K,
Nithin Dabilpuram
On Fri, May 2, 2025 at 6:56 PM Tanmay Jagdale <tanmay@marvell.com> wrote:
>
> From: Kiran Kumar K <kirankumark@marvell.com>
>
> In case of IPsec, the inbound SPI can be random. HW supports mapping
> SPI to an arbitrary SA index. SPI to SA index is done using a lookup
> in NPC cam entry with key as SPI, MATCH_ID, LFID. Adding Mbox API
> changes to configure the match table.
>
> Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
> Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
> Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>
> Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
> ---
> .../ethernet/marvell/octeontx2/af/Makefile | 2 +-
> .../net/ethernet/marvell/octeontx2/af/mbox.h | 27 +++
> .../net/ethernet/marvell/octeontx2/af/rvu.c | 4 +
> .../net/ethernet/marvell/octeontx2/af/rvu.h | 13 ++
> .../ethernet/marvell/octeontx2/af/rvu_nix.c | 6 +
> .../marvell/octeontx2/af/rvu_nix_spi.c | 220 ++++++++++++++++++
> .../ethernet/marvell/octeontx2/af/rvu_reg.h | 4 +
> 7 files changed, 275 insertions(+), 1 deletion(-)
> create mode 100644 drivers/net/ethernet/marvell/octeontx2/af/rvu_nix_spi.c
>
> diff --git a/drivers/net/ethernet/marvell/octeontx2/af/Makefile b/drivers/net/ethernet/marvell/octeontx2/af/Makefile
> index ccea37847df8..49318017f35f 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/af/Makefile
> +++ b/drivers/net/ethernet/marvell/octeontx2/af/Makefile
> @@ -8,7 +8,7 @@ obj-$(CONFIG_OCTEONTX2_MBOX) += rvu_mbox.o
> obj-$(CONFIG_OCTEONTX2_AF) += rvu_af.o
>
> rvu_mbox-y := mbox.o rvu_trace.o
> -rvu_af-y := cgx.o rvu.o rvu_cgx.o rvu_npa.o rvu_nix.o \
> +rvu_af-y := cgx.o rvu.o rvu_cgx.o rvu_npa.o rvu_nix.o rvu_nix_spi.o \
> rvu_reg.o rvu_npc.o rvu_debugfs.o ptp.o rvu_npc_fs.o \
> rvu_cpt.o rvu_devlink.o rpm.o rvu_cn10k.o rvu_switch.o \
> rvu_sdp.o rvu_npc_hash.o mcs.o mcs_rvu_if.o mcs_cnf10kb.o \
> diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
> index 715efcc04c9e..5cebf10a15a7 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
> +++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
> @@ -326,6 +326,10 @@ M(NIX_READ_INLINE_IPSEC_CFG, 0x8023, nix_read_inline_ipsec_cfg, \
> M(NIX_LF_INLINE_RQ_CFG, 0x8024, nix_lf_inline_rq_cfg, \
> nix_rq_cpt_field_mask_cfg_req, \
> msg_rsp) \
> +M(NIX_SPI_TO_SA_ADD, 0x8026, nix_spi_to_sa_add, nix_spi_to_sa_add_req, \
> + nix_spi_to_sa_add_rsp) \
> +M(NIX_SPI_TO_SA_DELETE, 0x8027, nix_spi_to_sa_delete, nix_spi_to_sa_delete_req, \
> + msg_rsp) \
> M(NIX_MCAST_GRP_CREATE, 0x802b, nix_mcast_grp_create, nix_mcast_grp_create_req, \
> nix_mcast_grp_create_rsp) \
> M(NIX_MCAST_GRP_DESTROY, 0x802c, nix_mcast_grp_destroy, nix_mcast_grp_destroy_req, \
> @@ -880,6 +884,29 @@ enum nix_rx_vtag0_type {
> NIX_AF_LFX_RX_VTAG_TYPE7,
> };
>
> +/* For SPI to SA index add */
> +struct nix_spi_to_sa_add_req {
> + struct mbox_msghdr hdr;
> + u32 sa_index;
> + u32 spi_index;
> + u16 match_id;
> + bool valid;
> +};
> +
> +struct nix_spi_to_sa_add_rsp {
> + struct mbox_msghdr hdr;
> + u16 hash_index;
> + u8 way;
> + u8 is_duplicate;
> +};
> +
> +/* To free SPI to SA index */
> +struct nix_spi_to_sa_delete_req {
> + struct mbox_msghdr hdr;
> + u16 hash_index;
> + u8 way;
> +};
> +
> /* For NIX LF context alloc and init */
> struct nix_lf_alloc_req {
> struct mbox_msghdr hdr;
> diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
> index ea346e59835b..2b7c09bb24e1 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
> +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
> @@ -90,6 +90,9 @@ static void rvu_setup_hw_capabilities(struct rvu *rvu)
>
> if (is_rvu_npc_hash_extract_en(rvu))
> hw->cap.npc_hash_extract = true;
> +
> + if (is_rvu_nix_spi_to_sa_en(rvu))
> + hw->cap.spi_to_sas = 0x2000;
> }
>
> /* Poll a RVU block's register 'offset', for a 'zero'
> @@ -2723,6 +2726,7 @@ static void __rvu_flr_handler(struct rvu *rvu, u16 pcifunc)
> rvu_blklf_teardown(rvu, pcifunc, BLKADDR_NPA);
> rvu_reset_lmt_map_tbl(rvu, pcifunc);
> rvu_detach_rsrcs(rvu, NULL, pcifunc);
> +
> /* In scenarios where PF/VF drivers detach NIXLF without freeing MCAM
> * entries, check and free the MCAM entries explicitly to avoid leak.
> * Since LF is detached use LF number as -1.
> diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
> index 71407f6318ec..42fc3e762bc0 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
> +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
> @@ -395,6 +395,7 @@ struct hw_cap {
> u16 nix_txsch_per_cgx_lmac; /* Max Q's transmitting to CGX LMAC */
> u16 nix_txsch_per_lbk_lmac; /* Max Q's transmitting to LBK LMAC */
> u16 nix_txsch_per_sdp_lmac; /* Max Q's transmitting to SDP LMAC */
> + u16 spi_to_sas; /* Num of SPI to SA index */
> bool nix_fixed_txschq_mapping; /* Schq mapping fixed or flexible */
> bool nix_shaping; /* Is shaping and coloring supported */
> bool nix_shaper_toggle_wait; /* Shaping toggle needs poll/wait */
> @@ -800,6 +801,17 @@ static inline bool is_rvu_npc_hash_extract_en(struct rvu *rvu)
> return true;
> }
>
> +static inline bool is_rvu_nix_spi_to_sa_en(struct rvu *rvu)
> +{
> + u64 nix_const2;
> +
> + nix_const2 = rvu_read64(rvu, BLKADDR_NIX0, NIX_AF_CONST2);
> + if ((nix_const2 >> 48) & 0xffff)
> + return true;
> +
> + return false;
> +}
> +
> static inline u16 rvu_nix_chan_cgx(struct rvu *rvu, u8 cgxid,
> u8 lmacid, u8 chan)
> {
> @@ -992,6 +1004,7 @@ int nix_get_struct_ptrs(struct rvu *rvu, u16 pcifunc,
> struct nix_hw **nix_hw, int *blkaddr);
> int rvu_nix_setup_ratelimit_aggr(struct rvu *rvu, u16 pcifunc,
> u16 rq_idx, u16 match_id);
> +int rvu_nix_free_spi_to_sa_table(struct rvu *rvu, uint16_t pcifunc);
> int nix_aq_context_read(struct rvu *rvu, struct nix_hw *nix_hw,
> struct nix_cn10k_aq_enq_req *aq_req,
> struct nix_cn10k_aq_enq_rsp *aq_rsp,
> diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
> index b15fd331facf..68525bfc8e6d 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
> +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
> @@ -1751,6 +1751,9 @@ int rvu_mbox_handler_nix_lf_free(struct rvu *rvu, struct nix_lf_free_req *req,
> else
> rvu_npc_free_mcam_entries(rvu, pcifunc, nixlf);
>
> + /* Reset SPI to SA index table */
> + rvu_nix_free_spi_to_sa_table(rvu, pcifunc);
> +
> /* Free any tx vtag def entries used by this NIX LF */
> if (!(req->flags & NIX_LF_DONT_FREE_TX_VTAG))
> nix_free_tx_vtag_entries(rvu, pcifunc);
> @@ -5312,6 +5315,9 @@ void rvu_nix_lf_teardown(struct rvu *rvu, u16 pcifunc, int blkaddr, int nixlf)
> nix_rx_sync(rvu, blkaddr);
> nix_txschq_free(rvu, pcifunc);
>
> + /* Reset SPI to SA index table */
> + rvu_nix_free_spi_to_sa_table(rvu, pcifunc);
> +
> clear_bit(NIXLF_INITIALIZED, &pfvf->flags);
>
> if (is_pf_cgxmapped(rvu, pf) && rvu->rep_mode)
> diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix_spi.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix_spi.c
> new file mode 100644
> index 000000000000..b8acc23a47bc
> --- /dev/null
> +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix_spi.c
> @@ -0,0 +1,220 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/* Marvell RVU Admin Function driver
> + *
> + * Copyright (C) 2022 Marvell.
Copyright year 2025?
> + *
> + */
> +
> +#include "rvu.h"
> +
> +static bool nix_spi_to_sa_index_check_duplicate(struct rvu *rvu,
> + struct nix_spi_to_sa_add_req *req,
> + struct nix_spi_to_sa_add_rsp *rsp,
> + int blkaddr, int16_t index, u8 way,
> + bool *is_valid, int lfidx)
> +{
> + u32 spi_index;
> + u16 match_id;
> + bool valid;
> + u8 lfid;
> + u64 wkey;
Maintain RCT order while declaring variables
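i.e. longest declaration line first:

	u32 spi_index;
	u16 match_id;
	bool valid;
	u64 wkey;
	u8 lfid;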
> +
> + wkey = rvu_read64(rvu, blkaddr, NIX_AF_SPI_TO_SA_KEYX_WAYX(index, way));
> + spi_index = (wkey & 0xFFFFFFFF);
> + match_id = ((wkey >> 32) & 0xFFFF);
> + lfid = ((wkey >> 48) & 0x7f);
> + valid = ((wkey >> 55) & 0x1);
> +
> + *is_valid = valid;
> + if (!valid)
> + return 0;
> +
> + if (req->spi_index == spi_index && req->match_id == match_id &&
> + lfidx == lfid) {
> + rsp->hash_index = index;
> + rsp->way = way;
> + rsp->is_duplicate = true;
> + return 1;
> + }
> + return 0;
> +}
> +
> +static void nix_spi_to_sa_index_table_update(struct rvu *rvu,
> + struct nix_spi_to_sa_add_req *req,
> + struct nix_spi_to_sa_add_rsp *rsp,
> + int blkaddr, int16_t index, u8 way,
> + int lfidx)
> +{
> + u64 wvalue;
> + u64 wkey;
> +
> + wkey = (req->spi_index | ((u64)req->match_id << 32) |
> + (((u64)lfidx) << 48) | ((u64)req->valid << 55));
> + rvu_write64(rvu, blkaddr, NIX_AF_SPI_TO_SA_KEYX_WAYX(index, way),
> + wkey);
> + wvalue = (req->sa_index & 0xFFFFFFFF);
> + rvu_write64(rvu, blkaddr, NIX_AF_SPI_TO_SA_VALUEX_WAYX(index, way),
> + wvalue);
> + rsp->hash_index = index;
> + rsp->way = way;
> + rsp->is_duplicate = false;
> +}
> +
> +int rvu_mbox_handler_nix_spi_to_sa_delete(struct rvu *rvu,
> + struct nix_spi_to_sa_delete_req *req,
> + struct msg_rsp *rsp)
> +{
> + struct rvu_hwinfo *hw = rvu->hw;
> + u16 pcifunc = req->hdr.pcifunc;
> + int lfidx, lfid;
> + int blkaddr;
> + u64 wvalue;
> + u64 wkey;
> + int ret = 0;
> +
> + if (!hw->cap.spi_to_sas)
> + return NIX_AF_ERR_PARAM;
> +
> + if (!is_nixlf_attached(rvu, pcifunc)) {
> + ret = NIX_AF_ERR_AF_LF_INVALID;
> + goto exit;
there is no need for a label here, you can return directly
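i.e. simply:

	if (!is_nixlf_attached(rvu, pcifunc))
		return NIX_AF_ERR_AF_LF_INVALID;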
> + }
> +
> + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc);
> + lfidx = rvu_get_lf(rvu, &hw->block[blkaddr], pcifunc, 0);
> + if (lfidx < 0) {
> + ret = NIX_AF_ERR_AF_LF_INVALID;
> + goto exit;
there is no need for a label here, you can return directly
> + }
> +
> + mutex_lock(&rvu->rsrc_lock);
> +
> + wkey = rvu_read64(rvu, blkaddr,
> + NIX_AF_SPI_TO_SA_KEYX_WAYX(req->hash_index, req->way));
> + lfid = ((wkey >> 48) & 0x7f);
It would be nice if you used macros instead of these hard-coded magic
numbers. The same comment applies to the whole patch series.
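For example, the key layout could be described once with GENMASK and
FIELD_GET (the names below are illustrative, not from the patch):

	#define NIX_SPI_TO_SA_KEY_SPI	GENMASK_ULL(31, 0)
	#define NIX_SPI_TO_SA_KEY_MATCH	GENMASK_ULL(47, 32)
	#define NIX_SPI_TO_SA_KEY_LF	GENMASK_ULL(54, 48)
	#define NIX_SPI_TO_SA_KEY_VALID	BIT_ULL(55)

	lfid = FIELD_GET(NIX_SPI_TO_SA_KEY_LF, wkey);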
> + if (lfid != lfidx) {
> + ret = NIX_AF_ERR_AF_LF_INVALID;
> + goto unlock;
> + }
> +
> + wkey = 0;
> + rvu_write64(rvu, blkaddr,
> + NIX_AF_SPI_TO_SA_KEYX_WAYX(req->hash_index, req->way), wkey);
> + wvalue = 0;
> + rvu_write64(rvu, blkaddr,
> + NIX_AF_SPI_TO_SA_VALUEX_WAYX(req->hash_index, req->way), wvalue);
> +unlock:
> + mutex_unlock(&rvu->rsrc_lock);
> +exit:
> + return ret;
> +}
> +
> +int rvu_mbox_handler_nix_spi_to_sa_add(struct rvu *rvu,
> + struct nix_spi_to_sa_add_req *req,
> + struct nix_spi_to_sa_add_rsp *rsp)
> +{
> + u16 way0_index, way1_index, way2_index, way3_index;
> + struct rvu_hwinfo *hw = rvu->hw;
> + u16 pcifunc = req->hdr.pcifunc;
> + bool way0, way1, way2, way3;
> + int ret = 0;
> + int blkaddr;
> + int lfidx;
> + u64 value;
> + u64 key;
> +
> + if (!hw->cap.spi_to_sas)
> + return NIX_AF_ERR_PARAM;
> +
> + if (!is_nixlf_attached(rvu, pcifunc)) {
> + ret = NIX_AF_ERR_AF_LF_INVALID;
> + goto exit;
there is no need for a label here, you can return directly
> + }
> +
> + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc);
> + lfidx = rvu_get_lf(rvu, &hw->block[blkaddr], pcifunc, 0);
> + if (lfidx < 0) {
> + ret = NIX_AF_ERR_AF_LF_INVALID;
> + goto exit;
there is no need for a label here, you can return directly
> + }
> +
> + mutex_lock(&rvu->rsrc_lock);
> +
> + key = (((u64)lfidx << 48) | ((u64)req->match_id << 32) | req->spi_index);
> + rvu_write64(rvu, blkaddr, NIX_AF_SPI_TO_SA_HASH_KEY, key);
> + value = rvu_read64(rvu, blkaddr, NIX_AF_SPI_TO_SA_HASH_VALUE);
> + way0_index = (value & 0x7ff);
> + way1_index = ((value >> 16) & 0x7ff);
> + way2_index = ((value >> 32) & 0x7ff);
> + way3_index = ((value >> 48) & 0x7ff);
> +
> + /* Check for duplicate entry */
> + if (nix_spi_to_sa_index_check_duplicate(rvu, req, rsp, blkaddr,
> + way0_index, 0, &way0, lfidx) ||
> + nix_spi_to_sa_index_check_duplicate(rvu, req, rsp, blkaddr,
> + way1_index, 1, &way1, lfidx) ||
> + nix_spi_to_sa_index_check_duplicate(rvu, req, rsp, blkaddr,
> + way2_index, 2, &way2, lfidx) ||
> + nix_spi_to_sa_index_check_duplicate(rvu, req, rsp, blkaddr,
> + way3_index, 3, &way3, lfidx)) {
> + ret = 0;
> + goto unlock;
> + }
> +
> + /* If not present, update first available way with index */
> + if (!way0)
> + nix_spi_to_sa_index_table_update(rvu, req, rsp, blkaddr,
> + way0_index, 0, lfidx);
> + else if (!way1)
> + nix_spi_to_sa_index_table_update(rvu, req, rsp, blkaddr,
> + way1_index, 1, lfidx);
> + else if (!way2)
> + nix_spi_to_sa_index_table_update(rvu, req, rsp, blkaddr,
> + way2_index, 2, lfidx);
> + else if (!way3)
> + nix_spi_to_sa_index_table_update(rvu, req, rsp, blkaddr,
> + way3_index, 3, lfidx);
> +unlock:
> + mutex_unlock(&rvu->rsrc_lock);
> +exit:
> + return ret;
> +}
> +
> +int rvu_nix_free_spi_to_sa_table(struct rvu *rvu, uint16_t pcifunc)
> +{
> + struct rvu_hwinfo *hw = rvu->hw;
> + int lfidx, lfid;
> + int index, way;
> + u64 value, key;
Maintain RCT order here
> + int blkaddr;
> +
> + if (!hw->cap.spi_to_sas)
> + return 0;
> +
> + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc);
> + lfidx = rvu_get_lf(rvu, &hw->block[blkaddr], pcifunc, 0);
> + if (lfidx < 0)
> + return NIX_AF_ERR_AF_LF_INVALID;
> +
> + mutex_lock(&rvu->rsrc_lock);
> + for (index = 0; index < hw->cap.spi_to_sas / 4; index++) {
> + for (way = 0; way < 4; way++) {
> + key = rvu_read64(rvu, blkaddr,
> + NIX_AF_SPI_TO_SA_KEYX_WAYX(index, way));
> + lfid = ((key >> 48) & 0x7f);
> + if (lfid == lfidx) {
> + key = 0;
> + rvu_write64(rvu, blkaddr,
> + NIX_AF_SPI_TO_SA_KEYX_WAYX(index, way),
> + key);
> + value = 0;
> + rvu_write64(rvu, blkaddr,
> + NIX_AF_SPI_TO_SA_VALUEX_WAYX(index, way),
> + value);
> + }
> + }
> + }
> + mutex_unlock(&rvu->rsrc_lock);
> +
> + return 0;
> +}
> diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
> index e5e005d5d71e..b64547fe4811 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
> +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
> @@ -396,6 +396,10 @@
> #define NIX_AF_RX_CHANX_CFG(a) (0x1A30 | (a) << 15)
> #define NIX_AF_CINT_TIMERX(a) (0x1A40 | (a) << 18)
> #define NIX_AF_LSO_FORMATX_FIELDX(a, b) (0x1B00 | (a) << 16 | (b) << 3)
> +#define NIX_AF_SPI_TO_SA_KEYX_WAYX(a, b) (0x1C00 | (a) << 16 | (b) << 3)
> +#define NIX_AF_SPI_TO_SA_VALUEX_WAYX(a, b) (0x1C40 | (a) << 16 | (b) << 3)
> +#define NIX_AF_SPI_TO_SA_HASH_KEY (0x1C90)
> +#define NIX_AF_SPI_TO_SA_HASH_VALUE (0x1CA0)
> #define NIX_AF_LFX_CFG(a) (0x4000 | (a) << 17)
> #define NIX_AF_LFX_SQS_CFG(a) (0x4020 | (a) << 17)
> #define NIX_AF_LFX_TX_CFG2(a) (0x4028 | (a) << 17)
> --
> 2.43.0
>
>
--
Regards,
Kalesh AP
* Re: [net-next PATCH v1 00/15] Enable Inbound IPsec offload on Marvell CN10K SoC
2025-05-02 13:19 [net-next PATCH v1 00/15] Enable Inbound IPsec offload on Marvell CN10K SoC Tanmay Jagdale
` (14 preceding siblings ...)
2025-05-02 13:19 ` [net-next PATCH v1 15/15] octeontx2-pf: ipsec: Add XFRM state and policy hooks for inbound flows Tanmay Jagdale
@ 2025-05-05 17:52 ` Leon Romanovsky
2025-05-13 5:11 ` Tanmay Jagdale
15 siblings, 1 reply; 43+ messages in thread
From: Leon Romanovsky @ 2025-05-05 17:52 UTC (permalink / raw)
To: Tanmay Jagdale
Cc: bbrezillon, arno, schalla, herbert, davem, sgoutham, lcherian,
gakula, jerinj, hkelam, sbhatta, andrew+netdev, edumazet, kuba,
pabeni, bbhushan2, bhelgaas, pstanner, gregkh, peterz, linux,
krzysztof.kozlowski, giovanni.cabiddu, linux-crypto, linux-kernel,
netdev, rkannoth, sumang, gcherian
On Fri, May 02, 2025 at 06:49:41PM +0530, Tanmay Jagdale wrote:
> This patch series adds support for inbound inline IPsec flows for the
> Marvell CN10K SoC.
It will be much easier if, in commit messages and comments, you use the
kernel naming, e.g. "IPsec packet offload" rather than "inline IPsec", etc.
Also, I wonder: do you have performance numbers for this code?
Thanks
>
> The packet flow
> ---------------
> An encrypted IPSec packet goes through two passes in the RVU hardware
> before reaching the CPU.
> First Pass:
> The first pass involves identifying the packet as IPSec, assigning an RQ,
> allocating a buffer from the Aura pool and then send it to CPT for decryption.
>
> Second Pass:
> After CPT decrypts the packet, it sends a metapacket to NIXRX via the X2P
> bus. The metapacket contains CPT_PARSE_HDR_S structure and some initial
> bytes of the decrypted packet which would help NIXRX in classification.
> CPT also sets BIT(11) of channel number to further help in identifcation.
> NIXRX allocates a new buffer for this packet and submits it to the CPU.
>
> Once the decrypted metapacket packet is delivered to the CPU, get the WQE
> pointer from CPT_PARSE_HDR_S in the packet buffer. This WQE points to the
> complete decrypted packet. We create an skb using this, set the relevant
> XFRM packet mode flags to indicate successful decryption, and submit it
> to the network stack.
>
>
> Patches are grouped as follows:
> -------------------------------
> 1) CPT LF movement from crypto driver to RVU AF
> 0001-crypto-octeontx2-Share-engine-group-info-with-AF-dri.patch
> 0002-octeontx2-af-Configure-crypto-hardware-for-inline-ip.patch
> 0003-octeontx2-af-Setup-Large-Memory-Transaction-for-cryp.patch
> 0004-octeontx2-af-Handle-inbound-inline-ipsec-config-in-A.patch
> 0005-crypto-octeontx2-Remove-inbound-inline-ipsec-config.patch
>
> 2) RVU AF Mailbox changes for CPT 2nd pass RQ mask, SPI-to-SA table,
> NIX-CPT BPID configuration
> 0006-octeontx2-af-Add-support-for-CPT-second-pass.patch
> 0007-octeontx2-af-Add-support-for-SPI-to-SA-index-transla.patch
> 0008-octeontx2-af-Add-mbox-to-alloc-free-BPIDs.patch
>
> 3) Inbound Inline IPsec support patches
> 0009-octeontx2-pf-ipsec-Allocate-Ingress-SA-table.patch
> 0010-octeontx2-pf-ipsec-Setup-NIX-HW-resources-for-inboun.patch
> 0011-octeontx2-pf-ipsec-Handle-NPA-threshhold-interrupt.patch
> 0012-octeontx2-pf-ipsec-Initialize-ingress-IPsec.patch
> 0013-octeontx2-pf-ipsec-Manage-NPC-rules-and-SPI-to-SA-ta.patch
> 0014-octeontx2-pf-ipsec-Process-CPT-metapackets.patch
> 0015-octeontx2-pf-ipsec-Add-XFRM-state-and-policy-hooks-f.patch
>
>
> Bharat Bhushan (5):
> crypto: octeontx2: Share engine group info with AF driver
> octeontx2-af: Configure crypto hardware for inline ipsec
> octeontx2-af: Setup Large Memory Transaction for crypto
> octeontx2-af: Handle inbound inline ipsec config in AF
> crypto: octeontx2: Remove inbound inline ipsec config
>
> Geetha sowjanya (1):
> octeontx2-af: Add mbox to alloc/free BPIDs
>
> Kiran Kumar K (1):
> octeontx2-af: Add support for SPI to SA index translation
>
> Rakesh Kudurumalla (1):
> octeontx2-af: Add support for CPT second pass
>
> Tanmay Jagdale (7):
> octeontx2-pf: ipsec: Allocate Ingress SA table
> octeontx2-pf: ipsec: Setup NIX HW resources for inbound flows
> octeontx2-pf: ipsec: Handle NPA threshold interrupt
> octeontx2-pf: ipsec: Initialize ingress IPsec
> octeontx2-pf: ipsec: Manage NPC rules and SPI-to-SA table entries
> octeontx2-pf: ipsec: Process CPT metapackets
> octeontx2-pf: ipsec: Add XFRM state and policy hooks for inbound flows
>
> .../marvell/octeontx2/otx2_cpt_common.h | 8 -
> drivers/crypto/marvell/octeontx2/otx2_cptpf.h | 10 -
> .../marvell/octeontx2/otx2_cptpf_main.c | 50 +-
> .../marvell/octeontx2/otx2_cptpf_mbox.c | 286 +---
> .../marvell/octeontx2/otx2_cptpf_ucode.c | 116 +-
> .../marvell/octeontx2/otx2_cptpf_ucode.h | 3 +-
> .../ethernet/marvell/octeontx2/af/Makefile | 2 +-
> .../ethernet/marvell/octeontx2/af/common.h | 1 +
> .../net/ethernet/marvell/octeontx2/af/mbox.h | 119 +-
> .../net/ethernet/marvell/octeontx2/af/rvu.c | 9 +-
> .../net/ethernet/marvell/octeontx2/af/rvu.h | 71 +
> .../ethernet/marvell/octeontx2/af/rvu_cn10k.c | 11 +
> .../ethernet/marvell/octeontx2/af/rvu_cpt.c | 706 +++++++++-
> .../ethernet/marvell/octeontx2/af/rvu_cpt.h | 71 +
> .../ethernet/marvell/octeontx2/af/rvu_nix.c | 230 +++-
> .../marvell/octeontx2/af/rvu_nix_spi.c | 220 +++
> .../ethernet/marvell/octeontx2/af/rvu_reg.h | 16 +
> .../marvell/octeontx2/af/rvu_struct.h | 4 +-
> .../marvell/octeontx2/nic/cn10k_ipsec.c | 1191 ++++++++++++++++-
> .../marvell/octeontx2/nic/cn10k_ipsec.h | 152 +++
> .../marvell/octeontx2/nic/otx2_common.c | 23 +-
> .../marvell/octeontx2/nic/otx2_common.h | 16 +
> .../ethernet/marvell/octeontx2/nic/otx2_pf.c | 17 +
> .../marvell/octeontx2/nic/otx2_struct.h | 16 +
> .../marvell/octeontx2/nic/otx2_txrx.c | 25 +-
> .../ethernet/marvell/octeontx2/nic/otx2_vf.c | 4 +
> 26 files changed, 2915 insertions(+), 462 deletions(-)
> create mode 100644 drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.h
> create mode 100644 drivers/net/ethernet/marvell/octeontx2/af/rvu_nix_spi.c
>
> --
> 2.43.0
>
>
* Re: [net-next PATCH v1 02/15] octeontx2-af: Configure crypto hardware for inline ipsec
2025-05-02 13:19 ` [net-next PATCH v1 02/15] octeontx2-af: Configure crypto hardware for inline ipsec Tanmay Jagdale
@ 2025-05-06 20:24 ` Simon Horman
2025-05-08 10:56 ` Bharat Bhushan
0 siblings, 1 reply; 43+ messages in thread
From: Simon Horman @ 2025-05-06 20:24 UTC (permalink / raw)
To: Tanmay Jagdale
Cc: bbrezillon, arno, schalla, herbert, davem, sgoutham, lcherian,
gakula, jerinj, hkelam, sbhatta, andrew+netdev, edumazet, kuba,
pabeni, bbhushan2, bhelgaas, pstanner, gregkh, peterz, linux,
krzysztof.kozlowski, giovanni.cabiddu, linux-crypto, linux-kernel,
netdev, rkannoth, sumang, gcherian
On Fri, May 02, 2025 at 06:49:43PM +0530, Tanmay Jagdale wrote:
> From: Bharat Bhushan <bbhushan2@marvell.com>
>
> Currently cpt_rx_inline_lf_cfg mailbox is handled by CPT PF
> driver to configure inbound inline ipsec. Ideally inbound
> inline ipsec configuration should be done by AF driver.
>
> This patch adds support to allocate, attach and initialize
> a cptlf from AF. It also configures NIX to send CPT instruction
> if the packet needs inline ipsec processing and configures
> CPT LF to handle inline inbound instruction received from NIX.
>
> Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
> Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
Hi Bharat and Tanmay,
Some minor feedback from my side.
...
> diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
> index 973ff5cf1a7d..8540a04a92f9 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
> +++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
> @@ -1950,6 +1950,20 @@ enum otx2_cpt_eng_type {
> OTX2_CPT_MAX_ENG_TYPES,
> };
>
> +struct cpt_rx_inline_lf_cfg_msg {
> + struct mbox_msghdr hdr;
> + u16 sso_pf_func;
> + u16 param1;
> + u16 param2;
> + u16 opcode;
> + u32 credit;
> + u32 credit_th;
> + u16 bpid;
On arm64 (at least) there will be a 2 byte hole here. Is that intended?
And, not strictly related to this patch, struct mboxhdr also has
a 2 byte hole before its rc member. Perhaps it would be nice
if it were filled by a reserved member?
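For instance, naming the pad keeps the layout identical while making it
visible (a sketch only; "reserved1" is a name I made up):

	u16 bpid;
	u16 reserved1;	/* explicit pad: closes the 2-byte hole on arm64 */
	u32 reserved;
	u8 ctx_ilen_valid : 1;
	u8 ctx_ilen : 7;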
> + u32 reserved;
> + u8 ctx_ilen_valid : 1;
> + u8 ctx_ilen : 7;
> +};
> +
> struct cpt_set_egrp_num {
> struct mbox_msghdr hdr;
> bool set;
...
> diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
> index fa403da555ff..6923fd756b19 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
> +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
> @@ -525,8 +525,38 @@ struct rvu_cpt_eng_grp {
> u8 grp_num;
> };
>
> +struct rvu_cpt_rx_inline_lf_cfg {
> + u16 sso_pf_func;
> + u16 param1;
> + u16 param2;
> + u16 opcode;
> + u32 credit;
> + u32 credit_th;
> + u16 bpid;
FWIW, there is a hole here too.
> + u32 reserved;
> + u8 ctx_ilen_valid : 1;
> + u8 ctx_ilen : 7;
> +};
...
> diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
...
> @@ -1087,6 +1115,72 @@ static void cpt_rxc_teardown(struct rvu *rvu, int blkaddr)
> #define DQPTR GENMASK_ULL(19, 0)
> #define NQPTR GENMASK_ULL(51, 32)
>
> +static void cpt_rx_ipsec_lf_enable_iqueue(struct rvu *rvu, int blkaddr,
> + int slot)
> +{
> + u64 val;
> +
> + /* Set Execution Enable of instruction queue */
> + val = otx2_cpt_read64(rvu->pfreg_base, blkaddr, slot, CPT_LF_INPROG);
> + val |= BIT_ULL(16);
Bit 16 seems to have a meaning, it would be nice if a #define was used
I mean something like this (but probably not actually this :)
#define CPT_LF_INPROG_ENA_QUEUE BIT_ULL(16)
Perhaps defined near where CPT_LF_INPROG is defined.
> + otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot, CPT_LF_INPROG, val);
> +
> + /* Set iqueue's enqueuing */
> + val = otx2_cpt_read64(rvu->pfreg_base, blkaddr, slot, CPT_LF_CTL);
> + val |= BIT_ULL(0);
Ditto.
> + otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot, CPT_LF_CTL, val);
> +}
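Concretely, with both bits named (reusing the name suggested above; the
CPT_LF_CTL one is equally made up), the writes become self-describing:

	#define CPT_LF_INPROG_ENA_QUEUE	BIT_ULL(16)
	#define CPT_LF_CTL_ENA_ENQUEUE	BIT_ULL(0)

	val |= CPT_LF_INPROG_ENA_QUEUE;
	...
	val |= CPT_LF_CTL_ENA_ENQUEUE;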
> +
> +static void cpt_rx_ipsec_lf_disable_iqueue(struct rvu *rvu, int blkaddr,
> + int slot)
> +{
> + int timeout = 1000000;
> + u64 inprog, inst_ptr;
> + u64 qsize, pending;
> + int i = 0;
> +
> + /* Disable instructions enqueuing */
> + otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot, CPT_LF_CTL, 0x0);
> +
> + inprog = otx2_cpt_read64(rvu->pfreg_base, blkaddr, slot, CPT_LF_INPROG);
> + inprog |= BIT_ULL(16);
> + otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot, CPT_LF_INPROG, inprog);
> +
> + qsize = otx2_cpt_read64(rvu->pfreg_base, blkaddr, slot, CPT_LF_Q_SIZE)
> + & 0x7FFF;
> + do {
> + inst_ptr = otx2_cpt_read64(rvu->pfreg_base, blkaddr, slot,
> + CPT_LF_Q_INST_PTR);
> + pending = (FIELD_GET(XQ_XOR, inst_ptr) * qsize * 40) +
> + FIELD_GET(NQPTR, inst_ptr) -
> + FIELD_GET(DQPTR, inst_ptr);
nit: I don't think you need the outer parentheses here.
But if you do, the two lines above should be indented by one more
character.
> + udelay(1);
> + timeout--;
> + } while ((pending != 0) && (timeout != 0));
nit: I don't think you need the inner parentheses here (x2).
> +
> + if (timeout == 0)
> + dev_warn(rvu->dev, "TIMEOUT: CPT poll on pending instructions\n");
> +
> + timeout = 1000000;
> + /* Wait for CPT queue to become execution-quiescent */
> + do {
> + inprog = otx2_cpt_read64(rvu->pfreg_base, blkaddr, slot,
> + CPT_LF_INPROG);
> + if ((FIELD_GET(INFLIGHT, inprog) == 0) &&
> + (FIELD_GET(GRB_CNT, inprog) == 0)) {
> + i++;
> + } else {
> + i = 0;
> + timeout--;
> + }
> + } while ((timeout != 0) && (i < 10));
> +
> + if (timeout == 0)
> + dev_warn(rvu->dev, "TIMEOUT: CPT poll on inflight count\n");
> + /* Wait for 2 us to flush all queue writes to memory */
> + udelay(2);
> +}
> +
> static void cpt_lf_disable_iqueue(struct rvu *rvu, int blkaddr, int slot)
> {
> int timeout = 1000000;
> @@ -1310,6 +1404,474 @@ int rvu_cpt_ctx_flush(struct rvu *rvu, u16 pcifunc)
> return 0;
> }
>
> +static irqreturn_t rvu_cpt_rx_ipsec_misc_intr_handler(int irq, void *ptr)
> +{
> + struct rvu_block *block = ptr;
> + struct rvu *rvu = block->rvu;
> + int blkaddr = block->addr;
> + struct device *dev = rvu->dev;
> + int slot = 0;
> + u64 val;
> +
> + val = otx2_cpt_read64(rvu->pfreg_base, blkaddr, slot, CPT_LF_MISC_INT);
> +
> + if (val & (1 << 6)) {
Along the lines of my earlier comment, bit 6 seems to have a meaning too.
Likewise for other bits below.
> + dev_err(dev, "Memory error detected while executing CPT_INST_S, LF %d.\n",
> + slot);
> + } else if (val & (1 << 5)) {
> + dev_err(dev, "HW error from an engine executing CPT_INST_S, LF %d.",
> + slot);
> + } else if (val & (1 << 3)) {
> + dev_err(dev, "SMMU fault while writing CPT_RES_S to CPT_INST_S[RES_ADDR], LF %d.\n",
> + slot);
> + } else if (val & (1 << 2)) {
> + dev_err(dev, "Memory error when accessing instruction memory queue CPT_LF_Q_BASE[ADDR].\n");
> + } else if (val & (1 << 1)) {
> + dev_err(dev, "Error enqueuing an instruction received at CPT_LF_NQ.\n");
> + } else {
> + dev_err(dev, "Unhandled interrupt in CPT LF %d\n", slot);
> + return IRQ_NONE;
> + }
> +
> + /* Acknowledge interrupts */
> + otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot, CPT_LF_MISC_INT,
> + val & CPT_LF_MISC_INT_MASK);
> +
> + return IRQ_HANDLED;
> +}
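Something like the following would make the handler self-documenting
(bit names are guesses derived from the error messages, not from the HRM):

	#define CPT_LF_MISC_INT_FAULT	BIT_ULL(6)
	#define CPT_LF_MISC_INT_HWERR	BIT_ULL(5)
	#define CPT_LF_MISC_INT_SMMU	BIT_ULL(3)
	#define CPT_LF_MISC_INT_IQERR	BIT_ULL(2)
	#define CPT_LF_MISC_INT_NQERR	BIT_ULL(1)

	if (val & CPT_LF_MISC_INT_FAULT)
		dev_err(dev, "Memory error detected while executing CPT_INST_S, LF %d.\n",
			slot);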
...
> +/* Allocate memory for CPT outbound Instruction queue.
> + * Instruction queue memory format is:
> + * -----------------------------
> + * | Instruction Group memory |
> + * | (CPT_LF_Q_SIZE[SIZE_DIV40] |
> + * | x 16 Bytes) |
> + * | |
> + * ----------------------------- <-- CPT_LF_Q_BASE[ADDR]
> + * | Flow Control (128 Bytes) |
> + * | |
> + * -----------------------------
> + * | Instruction Memory |
> + * | (CPT_LF_Q_SIZE[SIZE_DIV40] |
> + * | × 40 × 64 bytes) |
> + * | |
> + * -----------------------------
> + */
Nice diagram :)
...
> +static int rvu_rx_cpt_set_grp_pri_ilen(struct rvu *rvu, int blkaddr, int cptlf)
> +{
> + u64 reg_val;
> +
> + reg_val = rvu_read64(rvu, blkaddr, CPT_AF_LFX_CTL(cptlf));
> + /* Set High priority */
> + reg_val |= 1;
> + /* Set engine group */
> + reg_val |= ((1ULL << rvu->rvu_cpt.inline_ipsec_egrp) << 48);
> + /* Set ilen if valid */
> + if (rvu->rvu_cpt.rx_cfg.ctx_ilen_valid)
> + reg_val |= rvu->rvu_cpt.rx_cfg.ctx_ilen << 17;
Along the same lines. 48 and 17 seem to have meaning.
Perhaps define appropriate masks created using GENMASK_ULL
and use FIELD_PREP?
> +
> + rvu_write64(rvu, blkaddr, CPT_AF_LFX_CTL(cptlf), reg_val);
> + return 0;
> +}
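For instance (the field widths below are inferred from the open-coded
shifts and the 7-bit ctx_ilen member, so treat them as assumptions to be
checked against the HRM):

	#define CPT_AF_LFX_CTL_PRI	BIT_ULL(0)
	#define CPT_AF_LFX_CTL_CTX_ILEN	GENMASK_ULL(23, 17)
	#define CPT_AF_LFX_CTL_EGRP	GENMASK_ULL(55, 48)

	reg_val |= CPT_AF_LFX_CTL_PRI;
	reg_val |= FIELD_PREP(CPT_AF_LFX_CTL_EGRP,
			      BIT_ULL(rvu->rvu_cpt.inline_ipsec_egrp));
	if (rvu->rvu_cpt.rx_cfg.ctx_ilen_valid)
		reg_val |= FIELD_PREP(CPT_AF_LFX_CTL_CTX_ILEN,
				      rvu->rvu_cpt.rx_cfg.ctx_ilen);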
...
> +static void rvu_rx_cptlf_cleanup(struct rvu *rvu, int blkaddr, int slot)
> +{
> + /* IRQ cleanup */
> + rvu_cpt_rx_inline_cleanup_irq(rvu, blkaddr, slot);
> +
> + /* CPTLF cleanup */
> + rvu_cpt_rx_inline_cptlf_clean(rvu, blkaddr, slot);
> +}
> +
> +int rvu_mbox_handler_cpt_rx_inline_lf_cfg(struct rvu *rvu,
> + struct cpt_rx_inline_lf_cfg_msg *req,
> + struct msg_rsp *rsp)
Compilers warn that rvu_mbox_handler_cpt_rx_inline_lf_cfg doesn't have
a prototype.
I think this can be resolved by squashing the following hunk,
which appears in a subsequent patch in this series, into this patch.
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index 8540a04a92f9..ad74a27888da 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -213,6 +213,8 @@ M(CPT_FLT_ENG_INFO, 0xA09, cpt_flt_eng_info, cpt_flt_eng_info_req, \
cpt_flt_eng_info_rsp) \
M(CPT_SET_ENG_GRP_NUM, 0xA0A, cpt_set_eng_grp_num, cpt_set_egrp_num, \
msg_rsp) \
+M(CPT_RX_INLINE_LF_CFG, 0xBFE, cpt_rx_inline_lf_cfg, cpt_rx_inline_lf_cfg_msg, \
+ msg_rsp) \
/* SDP mbox IDs (range 0x1000 - 0x11FF) */ \
M(SET_SDP_CHAN_INFO, 0x1000, set_sdp_chan_info, sdp_chan_info_msg, msg_rsp) \
M(GET_SDP_CHAN_INFO, 0x1001, get_sdp_chan_info, msg_req, sdp_get_chan_info_msg) \
...
> diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.h
...
> +/* CPT instruction queue length in bytes */
> +#define RVU_CPT_INST_QLEN_BYTES \
> + ((RVU_CPT_SIZE_DIV40 * 40 * RVU_CPT_INST_SIZE) + \
> + RVU_CPT_INST_QLEN_EXTRA_BYTES)
nit: I think the line above should be indented by one more character
> +
> +/* CPT instruction group queue length in bytes */
> +#define RVU_CPT_INST_GRP_QLEN_BYTES \
> + ((RVU_CPT_SIZE_DIV40 + RVU_CPT_EXTRA_SIZE_DIV40) * 16)
> +
> +/* CPT FC length in bytes */
> +#define RVU_CPT_Q_FC_LEN 128
> +
> +/* CPT LF_Q_SIZE Register */
> +#define CPT_LF_Q_SIZE_DIV40 GENMASK_ULL(14, 0)
> +
> +/* CPT invalid engine group num */
> +#define OTX2_CPT_INVALID_CRYPTO_ENG_GRP 0xFF
> +
> +/* Fastpath ipsec opcode with inplace processing */
> +#define OTX2_CPT_INLINE_RX_OPCODE (0x26 | (1 << 6))
> +#define CN10K_CPT_INLINE_RX_OPCODE (0x29 | (1 << 6))
Along the lines of earlier comments, bit 6 seems to have a meaning here.
> +
> +/* Calculate CPT register offset */
> +#define CPT_RVU_FUNC_ADDR_S(blk, slot, offs) \
> + (((blk) << 20) | ((slot) << 12) | (offs))
And perhaps this is another candidate for GENMASK + FIELD_PREP.
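e.g. something along these lines (the slot and offs widths follow from the
shifts above; the blk field width is an assumption):

	#define CPT_RVU_FUNC_BLK	GENMASK_ULL(27, 20)
	#define CPT_RVU_FUNC_SLOT	GENMASK_ULL(19, 12)
	#define CPT_RVU_FUNC_OFFS	GENMASK_ULL(11, 0)

	FIELD_PREP(CPT_RVU_FUNC_BLK, blk) |
	FIELD_PREP(CPT_RVU_FUNC_SLOT, slot) |
	FIELD_PREP(CPT_RVU_FUNC_OFFS, offs)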
...
* Re: [net-next PATCH v1 15/15] octeontx2-pf: ipsec: Add XFRM state and policy hooks for inbound flows
2025-05-02 13:19 ` [net-next PATCH v1 15/15] octeontx2-pf: ipsec: Add XFRM state and policy hooks for inbound flows Tanmay Jagdale
@ 2025-05-07 6:42 ` kernel test robot
2025-05-07 18:31 ` Simon Horman
1 sibling, 0 replies; 43+ messages in thread
From: kernel test robot @ 2025-05-07 6:42 UTC (permalink / raw)
To: Tanmay Jagdale, bbrezillon, arno, schalla, herbert, davem,
sgoutham, lcherian, gakula, jerinj, hkelam, sbhatta,
andrew+netdev, edumazet, kuba, pabeni, bbhushan2, bhelgaas,
pstanner, gregkh, peterz, linux, krzysztof.kozlowski,
giovanni.cabiddu
Cc: oe-kbuild-all, linux-crypto, linux-kernel, netdev, rkannoth,
sumang, gcherian, Tanmay Jagdale
Hi Tanmay,
kernel test robot noticed the following build errors:
[auto build test ERROR on net-next/main]
url: https://github.com/intel-lab-lkp/linux/commits/Tanmay-Jagdale/crypto-octeontx2-Share-engine-group-info-with-AF-driver/20250502-213203
base: net-next/main
patch link: https://lore.kernel.org/r/20250502132005.611698-16-tanmay%40marvell.com
patch subject: [net-next PATCH v1 15/15] octeontx2-pf: ipsec: Add XFRM state and policy hooks for inbound flows
config: loongarch-allmodconfig (https://download.01.org/0day-ci/archive/20250507/202505071416.T1HY7g1G-lkp@intel.com/config)
compiler: loongarch64-linux-gcc (GCC) 14.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250507/202505071416.T1HY7g1G-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202505071416.T1HY7g1G-lkp@intel.com/
All errors (new ones prefixed by >>):
drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c: In function 'cn10k_ipsec_npa_refill_inb_ipsecq':
drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c:868:27: warning: variable 'qset' set but not used [-Wunused-but-set-variable]
868 | struct otx2_qset *qset = NULL;
| ^~~~
drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c: In function 'cn10k_inb_cpt_init':
drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c:932:15: warning: variable 'ptr' set but not used [-Wunused-but-set-variable]
932 | void *ptr;
| ^~~
drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c: In function 'cn10k_inb_write_sa':
>> drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c:1214:9: error: implicit declaration of function 'dmb'; did you mean 'rmb'? [-Wimplicit-function-declaration]
1214 | dmb(sy);
| ^~~
| rmb
>> drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c:1214:13: error: 'sy' undeclared (first use in this function); did you mean 's8'?
1214 | dmb(sy);
| ^~
| s8
drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c:1214:13: note: each undeclared identifier is reported only once for each function it appears in
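A portable fix would presumably be the generic DMA barrier rather than the
raw arm64 dmb macro, which does not exist on other architectures such as
loongarch, e.g.:

	-	dmb(sy);
	+	dma_wmb();	/* order the flushed CPT instruction before polling */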
vim +1214 drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
1164
1165 static int cn10k_inb_write_sa(struct otx2_nic *pf,
1166 struct xfrm_state *x,
1167 struct cn10k_inb_sw_ctx_info *inb_ctx_info)
1168 {
1169 dma_addr_t res_iova, dptr_iova, sa_iova;
1170 struct cn10k_rx_sa_s *sa_dptr, *sa_cptr;
1171 struct cpt_inst_s inst;
1172 u32 sa_size, off;
1173 struct cpt_res_s *res;
1174 u64 reg_val;
1175 int ret;
1176
1177 res = dma_alloc_coherent(pf->dev, sizeof(struct cpt_res_s),
1178 &res_iova, GFP_ATOMIC);
1179 if (!res)
1180 return -ENOMEM;
1181
1182 sa_cptr = inb_ctx_info->sa_entry;
1183 sa_iova = inb_ctx_info->sa_iova;
1184 sa_size = sizeof(struct cn10k_rx_sa_s);
1185
1186 sa_dptr = dma_alloc_coherent(pf->dev, sa_size, &dptr_iova, GFP_ATOMIC);
1187 if (!sa_dptr) {
1188 dma_free_coherent(pf->dev, sizeof(struct cpt_res_s), res,
1189 res_iova);
1190 return -ENOMEM;
1191 }
1192
1193 for (off = 0; off < (sa_size / 8); off++)
1194 *((u64 *)sa_dptr + off) = cpu_to_be64(*((u64 *)sa_cptr + off));
1195
1196 memset(&inst, 0, sizeof(struct cpt_inst_s));
1197
1198 res->compcode = 0;
1199 inst.res_addr = res_iova;
1200 inst.dptr = (u64)dptr_iova;
1201 inst.param2 = sa_size >> 3;
1202 inst.dlen = sa_size;
1203 inst.opcode_major = CN10K_IPSEC_MAJOR_OP_WRITE_SA;
1204 inst.opcode_minor = CN10K_IPSEC_MINOR_OP_WRITE_SA;
1205 inst.cptr = sa_iova;
1206 inst.ctx_val = 1;
1207 inst.egrp = CN10K_DEF_CPT_IPSEC_EGRP;
1208
1209 /* Re-use Outbound CPT LF to install Ingress SAs as well because
1210 * the driver does not own the ingress CPT LF.
1211 */
1212 pf->ipsec.io_addr = (__force u64)otx2_get_regaddr(pf, CN10K_CPT_LF_NQX(0));
1213 cn10k_cpt_inst_flush(pf, &inst, sizeof(struct cpt_inst_s));
> 1214 dmb(sy);
1215
1216 ret = cn10k_wait_for_cpt_respose(pf, res);
1217 if (ret)
1218 goto out;
1219
1220 /* Trigger CTX flush to write dirty data back to DRAM */
1221 reg_val = FIELD_PREP(GENMASK_ULL(45, 0), sa_iova >> 7);
1222 otx2_write64(pf, CN10K_CPT_LF_CTX_FLUSH, reg_val);
1223
1224 out:
1225 dma_free_coherent(pf->dev, sa_size, sa_dptr, dptr_iova);
1226 dma_free_coherent(pf->dev, sizeof(struct cpt_res_s), res, res_iova);
1227 return ret;
1228 }
1229
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* Re: [net-next PATCH v1 06/15] octeontx2-af: Add support for CPT second pass
2025-05-02 13:19 ` [net-next PATCH v1 06/15] octeontx2-af: Add support for CPT second pass Tanmay Jagdale
@ 2025-05-07 7:58 ` kernel test robot
2025-05-07 12:36 ` Simon Horman
1 sibling, 0 replies; 43+ messages in thread
From: kernel test robot @ 2025-05-07 7:58 UTC (permalink / raw)
To: Tanmay Jagdale, bbrezillon, arno, schalla, herbert, davem,
sgoutham, lcherian, gakula, jerinj, hkelam, sbhatta,
andrew+netdev, edumazet, kuba, pabeni, bbhushan2, bhelgaas,
pstanner, gregkh, peterz, linux, krzysztof.kozlowski,
giovanni.cabiddu
Cc: llvm, oe-kbuild-all, linux-crypto, linux-kernel, netdev, rkannoth,
sumang, gcherian, Rakesh Kudurumalla
Hi Tanmay,
kernel test robot noticed the following build warnings:
[auto build test WARNING on net-next/main]
url: https://github.com/intel-lab-lkp/linux/commits/Tanmay-Jagdale/crypto-octeontx2-Share-engine-group-info-with-AF-driver/20250502-213203
base: net-next/main
patch link: https://lore.kernel.org/r/20250502132005.611698-7-tanmay%40marvell.com
patch subject: [net-next PATCH v1 06/15] octeontx2-af: Add support for CPT second pass
config: x86_64-allyesconfig (https://download.01.org/0day-ci/archive/20250507/202505071511.neU9Siwr-lkp@intel.com/config)
compiler: clang version 20.1.2 (https://github.com/llvm/llvm-project 58df0ef89dd64126512e4ee27b4ac3fd8ddf6247)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250507/202505071511.neU9Siwr-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202505071511.neU9Siwr-lkp@intel.com/
All warnings (new ones prefixed by >>):
>> drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c:6723:6: warning: variable 'rq_mask' is used uninitialized whenever 'if' condition is false [-Wsometimes-uninitialized]
6723 | if (req->ipsec_cfg1.rq_mask_enable) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c:6729:41: note: uninitialized use occurs here
6729 | configure_rq_mask(rvu, blkaddr, nixlf, rq_mask,
| ^~~~~~~
drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c:6723:2: note: remove the 'if' if its condition is always true
6723 | if (req->ipsec_cfg1.rq_mask_enable) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c:6710:13: note: initialize the variable 'rq_mask' to silence this warning
6710 | int rq_mask, err;
| ^
| = 0
1 warning generated.
vim +6723 drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
6702
6703 int rvu_mbox_handler_nix_lf_inline_rq_cfg(struct rvu *rvu,
6704 struct nix_rq_cpt_field_mask_cfg_req *req,
6705 struct msg_rsp *rsp)
6706 {
6707 struct rvu_hwinfo *hw = rvu->hw;
6708 struct nix_hw *nix_hw;
6709 int blkaddr, nixlf;
6710 int rq_mask, err;
6711
6712 err = nix_get_nixlf(rvu, req->hdr.pcifunc, &nixlf, &blkaddr);
6713 if (err)
6714 return err;
6715
6716 nix_hw = get_nix_hw(rvu->hw, blkaddr);
6717 if (!nix_hw)
6718 return NIX_AF_ERR_INVALID_NIXBLK;
6719
6720 if (!hw->cap.second_cpt_pass)
6721 return NIX_AF_ERR_INVALID_NIXBLK;
6722
> 6723 if (req->ipsec_cfg1.rq_mask_enable) {
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* Re: [net-next PATCH v1 04/15] octeontx2-af: Handle inbound inline ipsec config in AF
2025-05-02 13:19 ` [net-next PATCH v1 04/15] octeontx2-af: Handle inbound inline ipsec config in AF Tanmay Jagdale
@ 2025-05-07 9:19 ` Simon Horman
2025-05-07 9:28 ` Simon Horman
0 siblings, 1 reply; 43+ messages in thread
From: Simon Horman @ 2025-05-07 9:19 UTC (permalink / raw)
To: Tanmay Jagdale
Cc: bbrezillon, arno, schalla, herbert, davem, sgoutham, lcherian,
gakula, jerinj, hkelam, sbhatta, andrew+netdev, edumazet, kuba,
pabeni, bbhushan2, bhelgaas, pstanner, gregkh, peterz, linux,
krzysztof.kozlowski, giovanni.cabiddu, linux-crypto, linux-kernel,
netdev, rkannoth, sumang, gcherian
On Fri, May 02, 2025 at 06:49:45PM +0530, Tanmay Jagdale wrote:
> From: Bharat Bhushan <bbhushan2@marvell.com>
>
> Now CPT context flush can be handled in AF as CPT LF
> can be attached to it. With that AF driver can completely
> handle inbound inline ipsec configuration mailbox, so
> forward this mailbox to AF driver.
>
> Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
> Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
> ---
> .../marvell/octeontx2/otx2_cpt_common.h | 1 -
> .../marvell/octeontx2/otx2_cptpf_mbox.c | 3 -
> .../net/ethernet/marvell/octeontx2/af/mbox.h | 2 +
> .../ethernet/marvell/octeontx2/af/rvu_cpt.c | 67 +++++++++----------
> .../ethernet/marvell/octeontx2/af/rvu_reg.h | 1 +
> 5 files changed, 34 insertions(+), 40 deletions(-)
>
> diff --git a/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h b/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h
> index df735eab8f08..27a2dd997f73 100644
> --- a/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h
> +++ b/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h
> @@ -33,7 +33,6 @@
> #define BAD_OTX2_CPT_ENG_TYPE OTX2_CPT_MAX_ENG_TYPES
>
> /* Take mbox id from end of CPT mbox range in AF (range 0xA00 - 0xBFF) */
> -#define MBOX_MSG_RX_INLINE_IPSEC_LF_CFG 0xBFE
> #define MBOX_MSG_GET_ENG_GRP_NUM 0xBFF
> #define MBOX_MSG_GET_CAPS 0xBFD
> #define MBOX_MSG_GET_KVF_LIMITS 0xBFC
> diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c
> index 5e6f70ac35a7..222419bd5ac9 100644
> --- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c
> +++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c
> @@ -326,9 +326,6 @@ static int cptpf_handle_vf_req(struct otx2_cptpf_dev *cptpf,
> case MBOX_MSG_GET_KVF_LIMITS:
> err = handle_msg_kvf_limits(cptpf, vf, req);
> break;
> - case MBOX_MSG_RX_INLINE_IPSEC_LF_CFG:
> - err = handle_msg_rx_inline_ipsec_lf_cfg(cptpf, req);
> - break;
>
> default:
> err = forward_to_af(cptpf, vf, req, size);
This removes the only caller of handle_msg_rx_inline_ipsec_lf_cfg()
Which in turn removes the only caller of rx_inline_ipsec_lf_cfg(),
and in turn send_inline_ipsec_inbound_msg().
Those functions should be removed by the same patch that makes the changes
above. Which I think could be split into a separate patch from the changes
below.
> diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
...
> diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
...
> @@ -1253,20 +1258,36 @@ int rvu_cpt_lf_teardown(struct rvu *rvu, u16 pcifunc, int blkaddr, int lf, int s
> return 0;
> }
>
> +static void cn10k_cpt_inst_flush(struct rvu *rvu, u64 *inst, u64 size)
> +{
> + u64 val = 0, tar_addr = 0;
> + void __iomem *io_addr;
> + u64 blkaddr = BLKADDR_CPT0;
nit: Please use reverse xmas tree order - longest line to shortest -
for local variable declarations in new Networking code.
Edward Cree's tool can be useful here.
https://github.com/ecree-solarflare/xmastree/
> +
> + io_addr = rvu->pfreg_base + CPT_RVU_FUNC_ADDR_S(blkaddr, 0, CPT_LF_NQX);
> +
> + /* Target address for LMTST flush tells HW how many 128bit
> + * words are present.
> + * tar_addr[6:4] size of first LMTST - 1 in units of 128b.
> + */
> + tar_addr |= (__force u64)io_addr | (((size / 16) - 1) & 0x7) << 4;
I see this pattern elsewhere. But, FWIW, I don't think it
is entirely desirable to:
1) Cast away the __iomem annotation
2) Treat a u64 as an (io?) address
3) Open code the calculation of tar_addr, which
also seems to appear in several other places.
If these things are really necessary then I would
put them in some combination of cn10k_lmt_flush(),
helpers, and wrappers.
But as this is consistent with code elsewhere,
perhaps that is a topic for another time.
> + dma_wmb();
> + memcpy((u64 *)rvu->rvu_cpt.lmt_addr, inst, size);
FWIW, I'm not sure that treating a u64 (the type of lmt_addr) as
an address is best either.
> + cn10k_lmt_flush(val, tar_addr);
> + dma_wmb();
> +}
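If it does stay, a small helper owning the cast and the tar_addr
arithmetic might look like this (a sketch only; the name and placement
are made up):

	static void rvu_cpt_lmtst_flush(struct rvu *rvu, void __iomem *io_addr,
					u64 *inst, u64 size)
	{
		/* tar_addr[6:4]: (number of 128-bit words in the LMTST) - 1 */
		u64 tar_addr = (__force u64)io_addr |
			       ((size / 16 - 1) & 0x7) << 4;

		dma_wmb();
		memcpy((void *)(uintptr_t)rvu->rvu_cpt.lmt_addr, inst, size);
		cn10k_lmt_flush(0, tar_addr);
		dma_wmb();
	}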
> +
> #define CPT_RES_LEN 16
> #define CPT_SE_IE_EGRP 1ULL
>
> static int cpt_inline_inb_lf_cmd_send(struct rvu *rvu, int blkaddr,
> int nix_blkaddr)
> {
> - int cpt_pf_num = rvu->cpt_pf_num;
> - struct cpt_inst_lmtst_req *req;
> dma_addr_t res_daddr;
> int timeout = 3000;
> u8 cpt_idx;
> - u64 *inst;
> + u64 inst[8];
> u16 *res;
> - int rc;
nit: reverse xmas tree here too.
>
> res = kzalloc(CPT_RES_LEN, GFP_KERNEL);
> if (!res)
...
* Re: [net-next PATCH v1 04/15] octeontx2-af: Handle inbound inline ipsec config in AF
2025-05-07 9:19 ` Simon Horman
@ 2025-05-07 9:28 ` Simon Horman
2025-05-13 6:08 ` Tanmay Jagdale
0 siblings, 1 reply; 43+ messages in thread
From: Simon Horman @ 2025-05-07 9:28 UTC (permalink / raw)
To: Tanmay Jagdale
Cc: bbrezillon, arno, schalla, herbert, davem, sgoutham, lcherian,
gakula, jerinj, hkelam, sbhatta, andrew+netdev, edumazet, kuba,
pabeni, bbhushan2, bhelgaas, pstanner, gregkh, peterz, linux,
krzysztof.kozlowski, giovanni.cabiddu, linux-crypto, linux-kernel,
netdev, rkannoth, sumang, gcherian
On Wed, May 07, 2025 at 10:19:18AM +0100, Simon Horman wrote:
> On Fri, May 02, 2025 at 06:49:45PM +0530, Tanmay Jagdale wrote:
> > From: Bharat Bhushan <bbhushan2@marvell.com>
...
> > diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c
> > index 5e6f70ac35a7..222419bd5ac9 100644
> > --- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c
> > +++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c
> > @@ -326,9 +326,6 @@ static int cptpf_handle_vf_req(struct otx2_cptpf_dev *cptpf,
> > case MBOX_MSG_GET_KVF_LIMITS:
> > err = handle_msg_kvf_limits(cptpf, vf, req);
> > break;
> > - case MBOX_MSG_RX_INLINE_IPSEC_LF_CFG:
> > - err = handle_msg_rx_inline_ipsec_lf_cfg(cptpf, req);
> > - break;
> >
> > default:
> > err = forward_to_af(cptpf, vf, req, size);
>
> This removes the only caller of handle_msg_rx_inline_ipsec_lf_cfg(),
> which in turn removes the only caller of rx_inline_ipsec_lf_cfg(),
> and in turn of send_inline_ipsec_inbound_msg().
>
> Those functions should be removed by the same patch that makes the changes
> above, which I think could be split into a separate patch from the changes
> below.
Sorry for not noticing before I sent my previous email,
but I now see that those functions are removed by the following patch.
But I do think this needs to be re-arranged a bit to avoid regressions
wrt W=1 builds.
...
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [net-next PATCH v1 10/15] octeontx2-pf: ipsec: Setup NIX HW resources for inbound flows
2025-05-02 13:19 ` [net-next PATCH v1 10/15] octeontx2-pf: ipsec: Setup NIX HW resources for inbound flows Tanmay Jagdale
@ 2025-05-07 10:03 ` kernel test robot
2025-05-07 13:46 ` Simon Horman
1 sibling, 0 replies; 43+ messages in thread
From: kernel test robot @ 2025-05-07 10:03 UTC (permalink / raw)
To: Tanmay Jagdale, bbrezillon, arno, schalla, herbert, davem,
sgoutham, lcherian, gakula, jerinj, hkelam, sbhatta,
andrew+netdev, edumazet, kuba, pabeni, bbhushan2, bhelgaas,
pstanner, gregkh, peterz, linux, krzysztof.kozlowski,
giovanni.cabiddu
Cc: llvm, oe-kbuild-all, linux-crypto, linux-kernel, netdev, rkannoth,
sumang, gcherian, Tanmay Jagdale
Hi Tanmay,
kernel test robot noticed the following build warnings:
[auto build test WARNING on net-next/main]
url: https://github.com/intel-lab-lkp/linux/commits/Tanmay-Jagdale/crypto-octeontx2-Share-engine-group-info-with-AF-driver/20250502-213203
base: net-next/main
patch link: https://lore.kernel.org/r/20250502132005.611698-11-tanmay%40marvell.com
patch subject: [net-next PATCH v1 10/15] octeontx2-pf: ipsec: Setup NIX HW resources for inbound flows
config: x86_64-allyesconfig (https://download.01.org/0day-ci/archive/20250507/202505071739.xTGCCtUx-lkp@intel.com/config)
compiler: clang version 20.1.2 (https://github.com/llvm/llvm-project 58df0ef89dd64126512e4ee27b4ac3fd8ddf6247)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250507/202505071739.xTGCCtUx-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202505071739.xTGCCtUx-lkp@intel.com/
All warnings (new ones prefixed by >>):
>> drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c:488:6: warning: variable 'pool' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
488 | if (err)
| ^~~
drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c:512:23: note: uninitialized use occurs here
512 | qmem_free(pfvf->dev, pool->stack);
| ^~~~
drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c:488:2: note: remove the 'if' if its condition is always false
488 | if (err)
| ^~~~~~~~
489 | goto pool_fail;
| ~~~~~~~~~~~~~~
drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c:466:24: note: initialize the variable 'pool' to silence this warning
466 | struct otx2_pool *pool;
| ^
| = NULL
1 warning generated.
vim +488 drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
461
462 static int cn10k_ipsec_setup_nix_rx_hw_resources(struct otx2_nic *pfvf)
463 {
464 struct otx2_hw *hw = &pfvf->hw;
465 int stack_pages, pool_id;
466 struct otx2_pool *pool;
467 int err, ptr, num_ptrs;
468 dma_addr_t bufptr;
469
470 num_ptrs = 256;
471 pool_id = pfvf->ipsec.inb_ipsec_pool;
472 stack_pages = (num_ptrs + hw->stack_pg_ptrs - 1) / hw->stack_pg_ptrs;
473
474 mutex_lock(&pfvf->mbox.lock);
475
476 /* Initialize aura context */
477 err = cn10k_ipsec_ingress_aura_init(pfvf, pool_id, pool_id, num_ptrs);
478 if (err)
479 goto fail;
480
481 /* Initialize pool */
482 err = otx2_pool_init(pfvf, pool_id, stack_pages, num_ptrs, pfvf->rbsize, AURA_NIX_RQ);
483 if (err)
484 goto fail;
485
486 /* Flush accumulated messages */
487 err = otx2_sync_mbox_msg(&pfvf->mbox);
> 488 if (err)
489 goto pool_fail;
490
491 /* Allocate pointers and free them to aura/pool */
492 pool = &pfvf->qset.pool[pool_id];
493 for (ptr = 0; ptr < num_ptrs; ptr++) {
494 err = otx2_alloc_rbuf(pfvf, pool, &bufptr, pool_id, ptr);
495 if (err) {
496 err = -ENOMEM;
497 goto pool_fail;
498 }
499 pfvf->hw_ops->aura_freeptr(pfvf, pool_id, bufptr + OTX2_HEAD_ROOM);
500 }
501
502 /* Initialize RQ and map buffers from pool_id */
503 err = cn10k_ipsec_ingress_rq_init(pfvf, pfvf->ipsec.inb_ipsec_rq, pool_id);
504 if (err)
505 goto pool_fail;
506
507 mutex_unlock(&pfvf->mbox.lock);
508 return 0;
509
510 pool_fail:
511 mutex_unlock(&pfvf->mbox.lock);
512 qmem_free(pfvf->dev, pool->stack);
513 qmem_free(pfvf->dev, pool->fc_addr);
514 page_pool_destroy(pool->page_pool);
515 devm_kfree(pfvf->dev, pool->xdp);
516 pool->xsk_pool = NULL;
517 fail:
518 otx2_mbox_reset(&pfvf->mbox.mbox, 0);
519 return err;
520 }
521
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [net-next PATCH v1 11/15] octeontx2-pf: ipsec: Handle NPA threshold interrupt
2025-05-02 13:19 ` [net-next PATCH v1 11/15] octeontx2-pf: ipsec: Handle NPA threshold interrupt Tanmay Jagdale
@ 2025-05-07 12:04 ` kernel test robot
2025-05-07 14:20 ` Simon Horman
1 sibling, 0 replies; 43+ messages in thread
From: kernel test robot @ 2025-05-07 12:04 UTC (permalink / raw)
To: Tanmay Jagdale, bbrezillon, arno, schalla, herbert, davem,
sgoutham, lcherian, gakula, jerinj, hkelam, sbhatta,
andrew+netdev, edumazet, kuba, pabeni, bbhushan2, bhelgaas,
pstanner, gregkh, linux, krzysztof.kozlowski, giovanni.cabiddu
Cc: llvm, oe-kbuild-all, linux-crypto, linux-kernel, netdev, rkannoth,
sumang, gcherian, Tanmay Jagdale
Hi Tanmay,
kernel test robot noticed the following build warnings:
[auto build test WARNING on net-next/main]
url: https://github.com/intel-lab-lkp/linux/commits/Tanmay-Jagdale/crypto-octeontx2-Share-engine-group-info-with-AF-driver/20250502-213203
base: net-next/main
patch link: https://lore.kernel.org/r/20250502132005.611698-12-tanmay%40marvell.com
patch subject: [net-next PATCH v1 11/15] octeontx2-pf: ipsec: Handle NPA threshold interrupt
config: x86_64-allyesconfig (https://download.01.org/0day-ci/archive/20250507/202505071904.TWc5095k-lkp@intel.com/config)
compiler: clang version 20.1.2 (https://github.com/llvm/llvm-project 58df0ef89dd64126512e4ee27b4ac3fd8ddf6247)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250507/202505071904.TWc5095k-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202505071904.TWc5095k-lkp@intel.com/
All warnings (new ones prefixed by >>):
drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c:488:6: warning: variable 'pool' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
488 | if (err)
| ^~~
drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c:512:23: note: uninitialized use occurs here
512 | qmem_free(pfvf->dev, pool->stack);
| ^~~~
drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c:488:2: note: remove the 'if' if its condition is always false
488 | if (err)
| ^~~~~~~~
489 | goto pool_fail;
| ~~~~~~~~~~~~~~
drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c:466:24: note: initialize the variable 'pool' to silence this warning
466 | struct otx2_pool *pool;
| ^
| = NULL
>> drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c:528:20: warning: variable 'qset' set but not used [-Wunused-but-set-variable]
528 | struct otx2_qset *qset = NULL;
| ^
>> drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c:591:8: warning: variable 'ptr' set but not used [-Wunused-but-set-variable]
591 | void *ptr;
| ^
3 warnings generated.
vim +/qset +528 drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
521
522 static void cn10k_ipsec_npa_refill_inb_ipsecq(struct work_struct *work)
523 {
524 struct cn10k_ipsec *ipsec = container_of(work, struct cn10k_ipsec,
525 refill_npa_inline_ipsecq);
526 struct otx2_nic *pfvf = container_of(ipsec, struct otx2_nic, ipsec);
527 struct otx2_pool *pool = NULL;
> 528 struct otx2_qset *qset = NULL;
529 u64 val, *ptr, op_int = 0, count;
530 int err, pool_id, idx;
531 dma_addr_t bufptr;
532
533 qset = &pfvf->qset;
534
535 val = otx2_read64(pfvf, NPA_LF_QINTX_INT(0));
536 if (!(val & 1))
537 return;
538
539 ptr = otx2_get_regaddr(pfvf, NPA_LF_AURA_OP_INT);
540 val = otx2_atomic64_add(((u64)pfvf->ipsec.inb_ipsec_pool << 44), ptr);
541
542 /* Error interrupt bits */
543 if (val & 0xff)
544 op_int = (val & 0xff);
545
546 /* Refill buffers on a Threshold interrupt */
547 if (val & (1 << 16)) {
548 /* Get the current number of buffers consumed */
549 ptr = otx2_get_regaddr(pfvf, NPA_LF_AURA_OP_CNT);
550 count = otx2_atomic64_add(((u64)pfvf->ipsec.inb_ipsec_pool << 44), ptr);
551 count &= GENMASK_ULL(35, 0);
552
553 /* Refill */
554 pool_id = pfvf->ipsec.inb_ipsec_pool;
555 pool = &pfvf->qset.pool[pool_id];
556
557 for (idx = 0; idx < count; idx++) {
558 err = otx2_alloc_rbuf(pfvf, pool, &bufptr, pool_id, idx);
559 if (err) {
560 netdev_err(pfvf->netdev,
561 "Insufficient memory for IPsec pool buffers\n");
562 break;
563 }
564 pfvf->hw_ops->aura_freeptr(pfvf, pool_id,
565 bufptr + OTX2_HEAD_ROOM);
566 }
567
568 op_int |= (1 << 16);
569 }
570
571 /* Clear/ACK Interrupt */
572 if (op_int)
573 otx2_write64(pfvf, NPA_LF_AURA_OP_INT,
574 ((u64)pfvf->ipsec.inb_ipsec_pool << 44) | op_int);
575 }
576
577 static irqreturn_t cn10k_ipsec_npa_inb_ipsecq_intr_handler(int irq, void *data)
578 {
579 struct otx2_nic *pf = data;
580
581 schedule_work(&pf->ipsec.refill_npa_inline_ipsecq);
582
583 return IRQ_HANDLED;
584 }
585
586 static int cn10k_inb_cpt_init(struct net_device *netdev)
587 {
588 struct otx2_nic *pfvf = netdev_priv(netdev);
589 int ret = 0, vec;
590 char *irq_name;
> 591 void *ptr;
592 u64 val;
593
594 ret = cn10k_ipsec_setup_nix_rx_hw_resources(pfvf);
595 if (ret) {
596 netdev_err(netdev, "Failed to setup NIX HW resources for IPsec\n");
597 return ret;
598 }
599
600 /* Work entry for refilling the NPA queue for ingress inline IPSec */
601 INIT_WORK(&pfvf->ipsec.refill_npa_inline_ipsecq,
602 cn10k_ipsec_npa_refill_inb_ipsecq);
603
604 /* Register NPA interrupt */
605 vec = pfvf->hw.npa_msixoff;
606 irq_name = &pfvf->hw.irq_name[vec * NAME_SIZE];
607 snprintf(irq_name, NAME_SIZE, "%s-npa-qint", pfvf->netdev->name);
608
609 ret = request_irq(pci_irq_vector(pfvf->pdev, vec),
610 cn10k_ipsec_npa_inb_ipsecq_intr_handler, 0,
611 irq_name, pfvf);
612 if (ret) {
613 dev_err(pfvf->dev,
614 "RVUPF%d: IRQ registration failed for NPA QINT%d\n",
615 rvu_get_pf(pfvf->pcifunc), 0);
616 return ret;
617 }
618
619 /* Enable NPA threshold interrupt */
620 ptr = otx2_get_regaddr(pfvf, NPA_LF_AURA_OP_INT);
621 val = BIT_ULL(43) | BIT_ULL(17);
622 otx2_write64(pfvf, NPA_LF_AURA_OP_INT,
623 ((u64)pfvf->ipsec.inb_ipsec_pool << 44) | val);
624
625 /* Enable interrupt */
626 otx2_write64(pfvf, NPA_LF_QINTX_ENA_W1S(0), BIT_ULL(0));
627
628 return ret;
629 }
630
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [net-next PATCH v1 06/15] octeontx2-af: Add support for CPT second pass
2025-05-02 13:19 ` [net-next PATCH v1 06/15] octeontx2-af: Add support for CPT second pass Tanmay Jagdale
2025-05-07 7:58 ` kernel test robot
@ 2025-05-07 12:36 ` Simon Horman
2025-05-13 5:18 ` Tanmay Jagdale
1 sibling, 1 reply; 43+ messages in thread
From: Simon Horman @ 2025-05-07 12:36 UTC (permalink / raw)
To: Tanmay Jagdale
Cc: bbrezillon, arno, schalla, herbert, davem, sgoutham, lcherian,
gakula, jerinj, hkelam, sbhatta, andrew+netdev, edumazet, kuba,
pabeni, bbhushan2, bhelgaas, pstanner, gregkh, peterz, linux,
krzysztof.kozlowski, giovanni.cabiddu, linux-crypto, linux-kernel,
netdev, rkannoth, sumang, gcherian, Rakesh Kudurumalla
On Fri, May 02, 2025 at 06:49:47PM +0530, Tanmay Jagdale wrote:
> From: Rakesh Kudurumalla <rkudurumalla@marvell.com>
>
> Implement a mailbox mechanism to allocate an rq_mask
> and apply it to a nixlf to toggle RQ context fields
> for CPT second pass packets.
>
> Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>
> Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
...
> diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c
> index 7fa98aeb3663..18e2a48e2de1 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c
> +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c
> @@ -544,6 +544,7 @@ void rvu_program_channels(struct rvu *rvu)
>
> void rvu_nix_block_cn10k_init(struct rvu *rvu, struct nix_hw *nix_hw)
> {
> + struct rvu_hwinfo *hw = rvu->hw;
> int blkaddr = nix_hw->blkaddr;
> u64 cfg;
>
> @@ -558,6 +559,16 @@ void rvu_nix_block_cn10k_init(struct rvu *rvu, struct nix_hw *nix_hw)
> cfg = rvu_read64(rvu, blkaddr, NIX_AF_CFG);
> cfg |= BIT_ULL(1) | BIT_ULL(2);
As per my comments on an earlier patch in this series:
bits 1 and 2 have meaning. It would be nice to use a #define to
convey this meaning to the reader.
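Something like the below, although the names here are only placeholders
as I don't have the bit definitions to hand:

	/* Substitute whatever the HRM calls these bits */
	#define NIX_AF_CFG_BIT1	BIT_ULL(1)
	#define NIX_AF_CFG_BIT2	BIT_ULL(2)

	cfg |= NIX_AF_CFG_BIT1 | NIX_AF_CFG_BIT2;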
> rvu_write64(rvu, blkaddr, NIX_AF_CFG, cfg);
> +
> + cfg = rvu_read64(rvu, blkaddr, NIX_AF_CONST);
> +
> + if (!(cfg & BIT_ULL(62))) {
> + hw->cap.second_cpt_pass = false;
> + return;
> + }
> +
> + hw->cap.second_cpt_pass = true;
> + nix_hw->rq_msk.total = NIX_RQ_MSK_PROFILES;
> }
>
> void rvu_apr_block_cn10k_init(struct rvu *rvu)
> diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
> index 6bd995c45dad..b15fd331facf 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
> +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
> @@ -6612,3 +6612,123 @@ int rvu_mbox_handler_nix_mcast_grp_update(struct rvu *rvu,
>
> return ret;
> }
> +
> +static inline void
> +configure_rq_mask(struct rvu *rvu, int blkaddr, int nixlf,
> + u8 rq_mask, bool enable)
> +{
> + u64 cfg, reg;
> +
> + cfg = rvu_read64(rvu, blkaddr, NIX_AF_LFX_RX_IPSEC_CFG1(nixlf));
> + reg = rvu_read64(rvu, blkaddr, NIX_AF_LFX_CFG(nixlf));
> + if (enable) {
> + cfg |= BIT_ULL(43);
> + reg = (reg & ~GENMASK_ULL(36, 35)) | ((u64)rq_mask << 35);
> + } else {
> + cfg &= ~BIT_ULL(43);
> + reg = (reg & ~GENMASK_ULL(36, 35));
> + }
Likewise for the bit, mask, and shift here.
And I think that using FIELD_PREP with another mask in place of the shift
is also appropriate here.
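A sketch of the pattern I mean, with a hypothetical mask name:

	#define NIX_AF_LFX_CFG_RQ_MASK	GENMASK_ULL(36, 35)

	reg &= ~NIX_AF_LFX_CFG_RQ_MASK;
	if (enable)
		reg |= FIELD_PREP(NIX_AF_LFX_CFG_RQ_MASK, rq_mask);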
> + rvu_write64(rvu, blkaddr, NIX_AF_LFX_RX_IPSEC_CFG1(nixlf), cfg);
> + rvu_write64(rvu, blkaddr, NIX_AF_LFX_CFG(nixlf), reg);
> +}
> +
> +static inline void
> +configure_spb_cpt(struct rvu *rvu, int blkaddr, int nixlf,
> + struct nix_rq_cpt_field_mask_cfg_req *req, bool enable)
> +{
> + u64 cfg;
> +
> + cfg = rvu_read64(rvu, blkaddr, NIX_AF_LFX_RX_IPSEC_CFG1(nixlf));
> + if (enable) {
> + cfg |= BIT_ULL(37);
> + cfg &= ~GENMASK_ULL(42, 38);
> + cfg |= ((u64)req->ipsec_cfg1.spb_cpt_sizem1 << 38);
> + cfg &= ~GENMASK_ULL(63, 44);
> + cfg |= ((u64)req->ipsec_cfg1.spb_cpt_aura << 44);
> + } else {
> + cfg &= ~BIT_ULL(37);
> + cfg &= ~GENMASK_ULL(42, 38);
> + cfg &= ~GENMASK_ULL(63, 44);
> + }
And here too.
> + rvu_write64(rvu, blkaddr, NIX_AF_LFX_RX_IPSEC_CFG1(nixlf), cfg);
> +}
...
> +int rvu_mbox_handler_nix_lf_inline_rq_cfg(struct rvu *rvu,
> + struct nix_rq_cpt_field_mask_cfg_req *req,
> + struct msg_rsp *rsp)
It would be nice to reduce this to 80 columns wide or less.
Perhaps like this?
int
rvu_mbox_handler_nix_lf_inline_rq_cfg(struct rvu *rvu,
struct nix_rq_cpt_field_mask_cfg_req *req,
struct msg_rsp *rsp)
Or perhaps by renaming nix_rq_cpt_field_mask_cfg_req to be shorter.
...
> diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
> index 245e69fcbff9..e5e005d5d71e 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
> +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
> @@ -433,6 +433,8 @@
> #define NIX_AF_MDQX_IN_MD_COUNT(a) (0x14e0 | (a) << 16)
> #define NIX_AF_SMQX_STATUS(a) (0x730 | (a) << 16)
> #define NIX_AF_MDQX_OUT_MD_COUNT(a) (0xdb0 | (a) << 16)
> +#define NIX_AF_RX_RQX_MASKX(a, b) (0x4A40 | (a) << 16 | (b) << 3)
> +#define NIX_AF_RX_RQX_SETX(a, b) (0x4A80 | (a) << 16 | (b) << 3)
FIELD_PREP could be used here in conjunction with #defines
for appropriate masks here too.
>
> #define NIX_PRIV_AF_INT_CFG (0x8000000)
> #define NIX_PRIV_LFX_CFG (0x8000010)
...
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [net-next PATCH v1 07/15] octeontx2-af: Add support for SPI to SA index translation
2025-05-02 13:19 ` [net-next PATCH v1 07/15] octeontx2-af: Add support for SPI to SA index translation Tanmay Jagdale
2025-05-03 16:12 ` Kalesh Anakkur Purayil
@ 2025-05-07 12:45 ` Simon Horman
2025-05-13 6:12 ` Tanmay Jagdale
1 sibling, 1 reply; 43+ messages in thread
From: Simon Horman @ 2025-05-07 12:45 UTC (permalink / raw)
To: Tanmay Jagdale
Cc: bbrezillon, arno, schalla, herbert, davem, sgoutham, lcherian,
gakula, jerinj, hkelam, sbhatta, andrew+netdev, edumazet, kuba,
pabeni, bbhushan2, bhelgaas, pstanner, gregkh, peterz, linux,
krzysztof.kozlowski, giovanni.cabiddu, linux-crypto, linux-kernel,
netdev, rkannoth, sumang, gcherian, Kiran Kumar K,
Nithin Dabilpuram
On Fri, May 02, 2025 at 06:49:48PM +0530, Tanmay Jagdale wrote:
> From: Kiran Kumar K <kirankumark@marvell.com>
>
> In case of IPsec, the inbound SPI can be random. HW supports mapping
> SPI to an arbitrary SA index. SPI to SA index is done using a lookup
> in NPC cam entry with key as SPI, MATCH_ID, LFID. Adding Mbox API
> changes to configure the match table.
>
> Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
> Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
> Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>
> Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
...
> diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
> index 715efcc04c9e..5cebf10a15a7 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
> +++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
> @@ -326,6 +326,10 @@ M(NIX_READ_INLINE_IPSEC_CFG, 0x8023, nix_read_inline_ipsec_cfg, \
> M(NIX_LF_INLINE_RQ_CFG, 0x8024, nix_lf_inline_rq_cfg, \
> nix_rq_cpt_field_mask_cfg_req, \
> msg_rsp) \
> +M(NIX_SPI_TO_SA_ADD, 0x8026, nix_spi_to_sa_add, nix_spi_to_sa_add_req, \
> + nix_spi_to_sa_add_rsp) \
> +M(NIX_SPI_TO_SA_DELETE, 0x8027, nix_spi_to_sa_delete, nix_spi_to_sa_delete_req, \
> + msg_rsp) \
Please keep line length to 80 columns or less in Networking code,
unless it reduces readability.
In this case perhaps:
M(NIX_SPI_TO_SA_DELETE, 0x8027, nix_spi_to_sa_delete, \
nix_spi_to_sa_delete_req, \
msg_rsp) \
Likewise throughout this patch (set).
checkpatch.pl --max-line-length=80 is your friend.
> M(NIX_MCAST_GRP_CREATE, 0x802b, nix_mcast_grp_create, nix_mcast_grp_create_req, \
> nix_mcast_grp_create_rsp) \
> M(NIX_MCAST_GRP_DESTROY, 0x802c, nix_mcast_grp_destroy, nix_mcast_grp_destroy_req, \
...
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [net-next PATCH v1 09/15] octeontx2-pf: ipsec: Allocate Ingress SA table
2025-05-02 13:19 ` [net-next PATCH v1 09/15] octeontx2-pf: ipsec: Allocate Ingress SA table Tanmay Jagdale
@ 2025-05-07 12:56 ` Simon Horman
2025-05-22 9:21 ` Tanmay Jagdale
0 siblings, 1 reply; 43+ messages in thread
From: Simon Horman @ 2025-05-07 12:56 UTC (permalink / raw)
To: Tanmay Jagdale
Cc: bbrezillon, arno, schalla, herbert, davem, sgoutham, lcherian,
gakula, jerinj, hkelam, sbhatta, andrew+netdev, edumazet, kuba,
pabeni, bbhushan2, bhelgaas, pstanner, gregkh, peterz, linux,
krzysztof.kozlowski, giovanni.cabiddu, linux-crypto, linux-kernel,
netdev, rkannoth, sumang, gcherian
On Fri, May 02, 2025 at 06:49:50PM +0530, Tanmay Jagdale wrote:
> Every NIX LF has the facility to maintain a contiguous SA table that
> is used by NIX RX to find the exact SA context pointer associated with
> a particular flow. Allocate a 128-entry SA table where each entry is of
> 2048 bytes which is enough to hold the complete inbound SA context.
>
> Add the structure definitions for SA context (cn10k_rx_sa_s) and
> SA bookkeeping information (ctx_inb_ctx_info).
>
> Also, initialize the inb_sw_ctx_list to track all the SAs and their
> associated NPC rules and hash table related data.
>
> Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
...
> diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
...
> @@ -146,6 +169,76 @@ struct cn10k_tx_sa_s {
> u64 hw_ctx[6]; /* W31 - W36 */
> };
>
> +struct cn10k_rx_sa_s {
> + u64 inb_ar_win_sz : 3; /* W0 */
> + u64 hard_life_dec : 1;
> + u64 soft_life_dec : 1;
> + u64 count_glb_octets : 1;
> + u64 count_glb_pkts : 1;
> + u64 count_mib_bytes : 1;
> + u64 count_mib_pkts : 1;
> + u64 hw_ctx_off : 7;
> + u64 ctx_id : 16;
> + u64 orig_pkt_fabs : 1;
> + u64 orig_pkt_free : 1;
> + u64 pkind : 6;
> + u64 rsvd_w0_40 : 1;
> + u64 eth_ovrwr : 1;
> + u64 pkt_output : 2;
> + u64 pkt_format : 1;
> + u64 defrag_opt : 2;
> + u64 x2p_dst : 1;
> + u64 ctx_push_size : 7;
> + u64 rsvd_w0_55 : 1;
> + u64 ctx_hdr_size : 2;
> + u64 aop_valid : 1;
> + u64 rsvd_w0_59 : 1;
> + u64 ctx_size : 4;
> +
> + u64 rsvd_w1_31_0 : 32; /* W1 */
> + u64 cookie : 32;
> +
> + u64 sa_valid : 1; /* W2 Control Word */
> + u64 sa_dir : 1;
> + u64 rsvd_w2_2_3 : 2;
> + u64 ipsec_mode : 1;
> + u64 ipsec_protocol : 1;
> + u64 aes_key_len : 2;
> + u64 enc_type : 3;
> + u64 life_unit : 1;
> + u64 auth_type : 4;
> + u64 encap_type : 2;
> + u64 et_ovrwr_ddr_en : 1;
> + u64 esn_en : 1;
> + u64 tport_l4_incr_csum : 1;
> + u64 iphdr_verify : 2;
> + u64 udp_ports_verify : 1;
> + u64 l2_l3_hdr_on_error : 1;
> + u64 rsvd_w25_31 : 7;
> + u64 spi : 32;
As I understand it, this driver is only intended to run on arm64 systems.
While it is also possible, with COMPILE_TEST, to compile the driver
for other 64-bit systems.
So, given the first point above, this may be moot. But the above
assumes that the byte order of the host is the same as the device.
Or perhaps more to the point, it has been written for a little-endian
host and the device is expecting the data in that byte order.
But u64 is supposed to represent host byte order. And, in my understanding
of things, this is the kind of problem that FIELD_PREP and FIELD_GET are
intended to avoid, when combined with endian-specific integer types (in this
case __le64 seems appropriate).
I do hesitate in bringing this up, as the above very likely works on
all systems on which this code is intended to run. But I do so
because it is not correct on all systems for which this code can be
compiled. And thus seems somehow misleading.
> +
> + u64 w3; /* W3 */
> +
> + u8 cipher_key[32]; /* W4 - W7 */
> + u32 rsvd_w8_0_31; /* W8 : IV */
> + u32 iv_gcm_salt;
> + u64 rsvd_w9; /* W9 */
> + u64 rsvd_w10; /* W10 : UDP Encap */
> + u32 dest_ipaddr; /* W11 - Tunnel mode: outer src and dest ipaddr */
> + u32 src_ipaddr;
> + u64 rsvd_w12_w30[19]; /* W12 - W30 */
> +
> + u64 ar_base; /* W31 */
> + u64 ar_valid_mask; /* W32 */
> + u64 hard_sa_life; /* W33 */
> + u64 soft_sa_life; /* W34 */
> + u64 mib_octs; /* W35 */
> + u64 mib_pkts; /* W36 */
> + u64 ar_winbits; /* W37 */
> +
> + u64 rsvd_w38_w100[63];
> +};
> +
> /* CPT instruction parameter-1 */
> #define CN10K_IPSEC_INST_PARAM1_DIS_L4_CSUM 0x1
> #define CN10K_IPSEC_INST_PARAM1_DIS_L3_CSUM 0x2
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [net-next PATCH v1 10/15] octeontx2-pf: ipsec: Setup NIX HW resources for inbound flows
2025-05-02 13:19 ` [net-next PATCH v1 10/15] octeontx2-pf: ipsec: Setup NIX HW resources for inbound flows Tanmay Jagdale
2025-05-07 10:03 ` kernel test robot
@ 2025-05-07 13:46 ` Simon Horman
2025-05-22 9:56 ` Tanmay Jagdale
1 sibling, 1 reply; 43+ messages in thread
From: Simon Horman @ 2025-05-07 13:46 UTC (permalink / raw)
To: Tanmay Jagdale
Cc: bbrezillon, arno, schalla, herbert, davem, sgoutham, lcherian,
gakula, jerinj, hkelam, sbhatta, andrew+netdev, edumazet, kuba,
pabeni, bbhushan2, bhelgaas, pstanner, gregkh, peterz, linux,
krzysztof.kozlowski, giovanni.cabiddu, linux-crypto, linux-kernel,
netdev, rkannoth, sumang, gcherian
On Fri, May 02, 2025 at 06:49:51PM +0530, Tanmay Jagdale wrote:
> An incoming encrypted IPsec packet in the RVU NIX hardware needs
> to be classified for inline fastpath processing and then assinged
nit: assigned
checkpatch.pl --codespell is your friend
> a RQ and Aura pool before sending to CPT for decryption.
>
> Create a dedicated RQ, Aura and Pool with the following setup
> specifically for IPsec flows:
> - Set ipsech_en, ipsecd_drop_en in RQ context to enable hardware
> fastpath processing for IPsec flows.
> - Configure the dedicated Aura to raise an interrupt when
> it's buffer count drops below a threshold value so that the
> buffers can be replenished from the CPU.
>
> The RQ, Aura and Pool contexts are initialized only when esp-hw-offload
> feature is enabled via ethtool.
>
> Also, move some of the RQ context macro definitions to otx2_common.h
> so that they can be used in the IPsec driver as well.
>
> Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
...
> diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
...
> +static int cn10k_ipsec_setup_nix_rx_hw_resources(struct otx2_nic *pfvf)
> +{
> + struct otx2_hw *hw = &pfvf->hw;
> + int stack_pages, pool_id;
> + struct otx2_pool *pool;
> + int err, ptr, num_ptrs;
> + dma_addr_t bufptr;
> +
> + num_ptrs = 256;
> + pool_id = pfvf->ipsec.inb_ipsec_pool;
> + stack_pages = (num_ptrs + hw->stack_pg_ptrs - 1) / hw->stack_pg_ptrs;
> +
> + mutex_lock(&pfvf->mbox.lock);
> +
> + /* Initialize aura context */
> + err = cn10k_ipsec_ingress_aura_init(pfvf, pool_id, pool_id, num_ptrs);
> + if (err)
> + goto fail;
> +
> + /* Initialize pool */
> + err = otx2_pool_init(pfvf, pool_id, stack_pages, num_ptrs, pfvf->rbsize, AURA_NIX_RQ);
> + if (err)
This appears to leak pool->fc_addr.
> + goto fail;
> +
> + /* Flush accumulated messages */
> + err = otx2_sync_mbox_msg(&pfvf->mbox);
> + if (err)
> + goto pool_fail;
> +
> + /* Allocate pointers and free them to aura/pool */
> + pool = &pfvf->qset.pool[pool_id];
> + for (ptr = 0; ptr < num_ptrs; ptr++) {
> + err = otx2_alloc_rbuf(pfvf, pool, &bufptr, pool_id, ptr);
> + if (err) {
> + err = -ENOMEM;
> + goto pool_fail;
> + }
> + pfvf->hw_ops->aura_freeptr(pfvf, pool_id, bufptr + OTX2_HEAD_ROOM);
> + }
> +
> + /* Initialize RQ and map buffers from pool_id */
> + err = cn10k_ipsec_ingress_rq_init(pfvf, pfvf->ipsec.inb_ipsec_rq, pool_id);
> + if (err)
> + goto pool_fail;
> +
> + mutex_unlock(&pfvf->mbox.lock);
> + return 0;
> +
> +pool_fail:
> + mutex_unlock(&pfvf->mbox.lock);
> + qmem_free(pfvf->dev, pool->stack);
> + qmem_free(pfvf->dev, pool->fc_addr);
> + page_pool_destroy(pool->page_pool);
> + devm_kfree(pfvf->dev, pool->xdp);
It is not clear to me why devm_kfree() is being called here.
I didn't look deeply. But I think it is likely that
either pool->xdp should be freed when the device is released.
Or pool->xdp should not be allocated (and freed) using devm functions.
> + pool->xsk_pool = NULL;
The clean-up of pool->stack, pool->page_pool, pool->xdp, and
pool->xsk_pool, all seem to unwind initialisation performed by
otx2_pool_init(). And appear to be duplicated elsewhere.
I would suggest adding a helper for that.
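Something like this hypothetical helper, mirroring the setup performed
by otx2_pool_init():

	static void otx2_pool_uninit(struct otx2_nic *pfvf, struct otx2_pool *pool)
	{
		page_pool_destroy(pool->page_pool);
		devm_kfree(pfvf->dev, pool->xdp);
		qmem_free(pfvf->dev, pool->stack);
		qmem_free(pfvf->dev, pool->fc_addr);
		pool->xsk_pool = NULL;
	}

which the error path here and the duplicates elsewhere could then share,
assuming the devm question above is resolved.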
> +fail:
> + otx2_mbox_reset(&pfvf->mbox.mbox, 0);
> + return err;
> +}
...
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [net-next PATCH v1 11/15] octeontx2-pf: ipsec: Handle NPA threshold interrupt
2025-05-02 13:19 ` [net-next PATCH v1 11/15] octeontx2-pf: ipsec: Handle NPA threshold interrupt Tanmay Jagdale
2025-05-07 12:04 ` kernel test robot
@ 2025-05-07 14:20 ` Simon Horman
1 sibling, 0 replies; 43+ messages in thread
From: Simon Horman @ 2025-05-07 14:20 UTC (permalink / raw)
To: Tanmay Jagdale
Cc: bbrezillon, arno, schalla, herbert, davem, sgoutham, lcherian,
gakula, jerinj, hkelam, sbhatta, andrew+netdev, edumazet, kuba,
pabeni, bbhushan2, bhelgaas, pstanner, gregkh, peterz, linux,
krzysztof.kozlowski, giovanni.cabiddu, linux-crypto, linux-kernel,
netdev, rkannoth, sumang, gcherian
On Fri, May 02, 2025 at 06:49:52PM +0530, Tanmay Jagdale wrote:
> The NPA Aura pool that is dedicated for 1st pass inline IPsec flows
> raises an interrupt when the buffers of that aura_id drop below a
> threshold value.
>
> Add the following changes to handle this interrupt
> - Increase the number of MSIX vectors requested for the PF/VF to
> include NPA vector.
> - Create a workqueue (refill_npa_inline_ipsecq) to allocate and
> refill buffers to the pool.
> - When the interrupt is raised, schedule the workqueue entry,
> cn10k_ipsec_npa_refill_inb_ipsecq(), where the current count of
> consumed buffers is determined via NPA_LF_AURA_OP_CNT and then
> replenished.
>
> Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
...
> diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
> index b88c1b4c5839..365327ab9079 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
> +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
> @@ -519,10 +519,77 @@ static int cn10k_ipsec_setup_nix_rx_hw_resources(struct otx2_nic *pfvf)
> return err;
> }
>
> +static void cn10k_ipsec_npa_refill_inb_ipsecq(struct work_struct *work)
> +{
> + struct cn10k_ipsec *ipsec = container_of(work, struct cn10k_ipsec,
> + refill_npa_inline_ipsecq);
> + struct otx2_nic *pfvf = container_of(ipsec, struct otx2_nic, ipsec);
> + struct otx2_pool *pool = NULL;
> + struct otx2_qset *qset = NULL;
> + u64 val, *ptr, op_int = 0, count;
> + int err, pool_id, idx;
> + dma_addr_t bufptr;
> +
> + qset = &pfvf->qset;
> +
> + val = otx2_read64(pfvf, NPA_LF_QINTX_INT(0));
> + if (!(val & 1))
> + return;
> +
> + ptr = otx2_get_regaddr(pfvf, NPA_LF_AURA_OP_INT);
Sparse complains about __iomem annotations around here:
.../cn10k_ipsec.c:539:13: warning: incorrect type in assignment (different address spaces)
.../cn10k_ipsec.c:539:13: expected unsigned long long [usertype] *ptr
.../cn10k_ipsec.c:539:13: got void [noderef] __iomem *
.../cn10k_ipsec.c:549:21: warning: incorrect type in assignment (different address spaces)
.../cn10k_ipsec.c:549:21: expected unsigned long long [usertype] *ptr
.../cn10k_ipsec.c:549:21: got void [noderef] __iomem *
.../cn10k_ipsec.c:620:13: warning: incorrect type in assignment (different address spaces)
.../cn10k_ipsec.c:620:13: expected void *ptr
.../cn10k_ipsec.c:620:13: got void [noderef] __iomem *
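Presumably these can be addressed by giving the locals the matching
annotation, e.g.:

	void __iomem *ptr;

	ptr = otx2_get_regaddr(pfvf, NPA_LF_AURA_OP_INT);

and updating otx2_atomic64_add() and the other users to take the
annotated type.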
> + val = otx2_atomic64_add(((u64)pfvf->ipsec.inb_ipsec_pool << 44), ptr);
> +
> + /* Error interrupt bits */
> + if (val & 0xff)
> + op_int = (val & 0xff);
> +
> + /* Refill buffers on a Threshold interrupt */
> + if (val & (1 << 16)) {
> + /* Get the current number of buffers consumed */
> + ptr = otx2_get_regaddr(pfvf, NPA_LF_AURA_OP_CNT);
> + count = otx2_atomic64_add(((u64)pfvf->ipsec.inb_ipsec_pool << 44), ptr);
> + count &= GENMASK_ULL(35, 0);
> +
> + /* Refill */
> + pool_id = pfvf->ipsec.inb_ipsec_pool;
> + pool = &pfvf->qset.pool[pool_id];
> +
> + for (idx = 0; idx < count; idx++) {
> + err = otx2_alloc_rbuf(pfvf, pool, &bufptr, pool_id, idx);
> + if (err) {
> + netdev_err(pfvf->netdev,
> + "Insufficient memory for IPsec pool buffers\n");
> + break;
> + }
> + pfvf->hw_ops->aura_freeptr(pfvf, pool_id,
> + bufptr + OTX2_HEAD_ROOM);
> + }
> +
> + op_int |= (1 << 16);
> + }
> +
> + /* Clear/ACK Interrupt */
> + if (op_int)
> + otx2_write64(pfvf, NPA_LF_AURA_OP_INT,
> + ((u64)pfvf->ipsec.inb_ipsec_pool << 44) | op_int);
> +}
...
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [net-next PATCH v1 13/15] octeontx2-pf: ipsec: Manage NPC rules and SPI-to-SA table entries
2025-05-02 13:19 ` [net-next PATCH v1 13/15] octeontx2-pf: ipsec: Manage NPC rules and SPI-to-SA table entries Tanmay Jagdale
@ 2025-05-07 15:58 ` Simon Horman
2025-05-22 10:01 ` Tanmay Jagdale
0 siblings, 1 reply; 43+ messages in thread
From: Simon Horman @ 2025-05-07 15:58 UTC (permalink / raw)
To: Tanmay Jagdale
Cc: bbrezillon, arno, schalla, herbert, davem, sgoutham, lcherian,
gakula, jerinj, hkelam, sbhatta, andrew+netdev, edumazet, kuba,
pabeni, bbhushan2, bhelgaas, pstanner, gregkh, peterz, linux,
krzysztof.kozlowski, giovanni.cabiddu, linux-crypto, linux-kernel,
netdev, rkannoth, sumang, gcherian
On Fri, May 02, 2025 at 06:49:54PM +0530, Tanmay Jagdale wrote:
> NPC rule for IPsec flows
> ------------------------
> Incoming IPsec packets are first classified for hardware fastpath
> processing in the NPC block. Hence, allocate an MCAM entry in NPC
> using the MCAM_ALLOC_ENTRY mailbox to add a rule for IPsec flow
> classification.
>
> Then, install an NPC rule at this entry for packet classification
> based on ESP header and SPI value with match action as UCAST_IPSEC.
> Also, these packets need to be directed to the dedicated receive
> queue so provide the RQ index as part of NPC_INSTALL_FLOW mailbox.
> Add a function to delete NPC rule as well.
>
> SPI-to-SA match table
> ---------------------
> NIX RX maintains a common hash table for matching the SPI value from
> in ESP packet to the SA index associated with it. This table has 2K entries
> with 4 ways. When a packet is received with action as UCAST_IPSEC, NIXRX
> uses the SPI from the packet header to perform lookup in the SPI-to-SA
> hash table. This lookup, if successful, returns an SA index that is used
> by NIXRX to calculate the exact SA context address and programs it in
> the CPT_INST_S before submitting the packet to CPT for decryption.
>
> Add functions to install and delete an entry from this table via the
> NIX_SPI_TO_SA_ADD and NIX_SPI_TO_SA_DELETE mailbox calls respectively.
>
> When the RQs are changed at runtime via ethtool, RVU PF driver frees all
> the resources and goes through reinitialization with the new set of receive
> queues. As part of this flow, the UCAST_IPSEC NPC rules that were installed
> by the RVU PF/VF driver have to be reconfigured with the new RQ index.
>
> So, delete the NPC rules when the interface is stopped via otx2_stop().
> When otx2_open() is called, re-install the NPC flow and re-initialize the
> SPI-to-SA table for every SA context that was previously installed.
>
> Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
> ---
> .../marvell/octeontx2/nic/cn10k_ipsec.c | 201 ++++++++++++++++++
> .../marvell/octeontx2/nic/cn10k_ipsec.h | 7 +
> .../ethernet/marvell/octeontx2/nic/otx2_pf.c | 9 +
> 3 files changed, 217 insertions(+)
>
> diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
...
> +static int cn10k_inb_install_flow(struct otx2_nic *pfvf, struct xfrm_state *x,
> + struct cn10k_inb_sw_ctx_info *inb_ctx_info)
> +{
> + struct npc_install_flow_req *req;
> + int err;
> +
> + mutex_lock(&pfvf->mbox.lock);
> +
> + req = otx2_mbox_alloc_msg_npc_install_flow(&pfvf->mbox);
> + if (!req) {
> + err = -ENOMEM;
> + goto out;
> + }
> +
> + req->entry = inb_ctx_info->npc_mcam_entry;
> + req->features |= BIT(NPC_IPPROTO_ESP) | BIT(NPC_IPSEC_SPI) | BIT(NPC_DMAC);
> + req->intf = NIX_INTF_RX;
> + req->index = pfvf->ipsec.inb_ipsec_rq;
> + req->match_id = 0xfeed;
> + req->channel = pfvf->hw.rx_chan_base;
> + req->op = NIX_RX_ACTIONOP_UCAST_IPSEC;
> + req->set_cntr = 1;
> + req->packet.spi = x->id.spi;
> + req->mask.spi = 0xffffffff;
I realise that the value is isomorphic, but I would use the following
so that the rvalue has an endian annotation that matches the lvalue.
req->mask.spi = cpu_to_be32(0xffffffff);
Flagged by Sparse.
> +
> + /* Send message to AF */
> + err = otx2_sync_mbox_msg(&pfvf->mbox);
> +out:
> + mutex_unlock(&pfvf->mbox.lock);
> + return err;
> +}
...
> +static int cn10k_inb_delete_spi_to_sa_match_entry(struct otx2_nic *pfvf,
> + struct cn10k_inb_sw_ctx_info *inb_ctx_info)
gcc-14.2.0 (at least) complains that cn10k_inb_delete_spi_to_sa_match_entry
is unused.
Likewise for cn10k_inb_delete_flow and cn10k_inb_delete_spi_to_sa_match_entry.
I'm unsure of the best way to address this but it would be nice
to avoid breaking build bisection for such a trivial reason.
Some ideas:
* Maybe it is possible to squash this and the last patch,
or bring part of the last patch into this patch, or otherwise
rearrange things to avoid this problem.
* Add temporary __maybe_unused annotations.
(I'd consider this a last resort.)
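For reference, the annotation would look like the below, though
rearranging the patches still seems preferable to me:

	static int __maybe_unused
	cn10k_inb_delete_spi_to_sa_match_entry(struct otx2_nic *pfvf,
					       struct cn10k_inb_sw_ctx_info *inb_ctx_info)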
...
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [net-next PATCH v1 14/15] octeontx2-pf: ipsec: Process CPT metapackets
2025-05-02 13:19 ` [net-next PATCH v1 14/15] octeontx2-pf: ipsec: Process CPT metapackets Tanmay Jagdale
@ 2025-05-07 16:30 ` Simon Horman
2025-05-23 4:08 ` Tanmay Jagdale
0 siblings, 1 reply; 43+ messages in thread
From: Simon Horman @ 2025-05-07 16:30 UTC (permalink / raw)
To: Tanmay Jagdale
Cc: bbrezillon, arno, schalla, herbert, davem, sgoutham, lcherian,
gakula, jerinj, hkelam, sbhatta, andrew+netdev, edumazet, kuba,
pabeni, bbhushan2, bhelgaas, pstanner, gregkh, peterz, linux,
krzysztof.kozlowski, giovanni.cabiddu, linux-crypto, linux-kernel,
netdev, rkannoth, sumang, gcherian
On Fri, May 02, 2025 at 06:49:55PM +0530, Tanmay Jagdale wrote:
> CPT hardware forwards decrypted IPsec packets to NIX via the X2P bus
> as metapackets which are of 256 bytes in length. Each metapacket
> contains CPT_PARSE_HDR_S and initial bytes of the decrypted packet
> that helps NIX RX in classifying and submitting to CPU. Additionally,
> CPT also sets BIT(11) of the channel number to indicate that it's a
> 2nd pass packet from CPT.
>
> Since the metapackets are not complete packets, they don't have to go
> through L3/L4 layer length and checksum verification so these are
> disabled via the NIX_LF_INLINE_RQ_CFG mailbox during IPsec initialization.
>
> The CPT_PARSE_HDR_S contains a WQE pointer to the complete decrypted
> packet. Add code in the rx NAPI handler to parse the header and extract
> WQE pointer. Later, use this WQE pointer to construct the skb, set the
> XFRM packet mode flags to indicate successful decryption before submitting
> it to the network stack.
>
> Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
> ---
> .../marvell/octeontx2/nic/cn10k_ipsec.c | 61 +++++++++++++++++++
> .../marvell/octeontx2/nic/cn10k_ipsec.h | 47 ++++++++++++++
> .../marvell/octeontx2/nic/otx2_struct.h | 16 +++++
> .../marvell/octeontx2/nic/otx2_txrx.c | 25 +++++++-
> 4 files changed, 147 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
> index 91c8f13b6e48..bebf5cdedee4 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
> +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
> @@ -346,6 +346,67 @@ static int cn10k_outb_cpt_init(struct net_device *netdev)
> return ret;
> }
>
> +struct nix_wqe_rx_s *cn10k_ipsec_process_cpt_metapkt(struct otx2_nic *pfvf,
> + struct nix_rx_sg_s *sg,
> + struct sk_buff *skb,
> + int qidx)
> +{
> + struct nix_wqe_rx_s *wqe = NULL;
> + u64 *seg_addr = &sg->seg_addr;
> + struct cpt_parse_hdr_s *cptp;
> + struct xfrm_offload *xo;
> + struct otx2_pool *pool;
> + struct xfrm_state *xs;
> + struct sec_path *sp;
> + u64 *va_ptr;
> + void *va;
> + int i;
> +
> + /* CPT_PARSE_HDR_S is present in the beginning of the buffer */
> + va = phys_to_virt(otx2_iova_to_phys(pfvf->iommu_domain, *seg_addr));
> +
> + /* Convert CPT_PARSE_HDR_S from BE to LE */
> + va_ptr = (u64 *)va;
phys_to_virt returns a void *. And there is no need to explicitly cast
another pointer type to or from a void *.
So probably this can simply be:
va_ptr = phys_to_virt(...);
> + for (i = 0; i < (sizeof(struct cpt_parse_hdr_s) / sizeof(u64)); i++)
> + va_ptr[i] = be64_to_cpu(va_ptr[i]);
Please don't use the same variable to hold both big endian and
host byte order values. Because tooling can no longer provide
information about endian mismatches.
Flagged by Sparse.
Also, isn't only the long word that exactly comprises the
wqe_ptr field of cpt_parse_hdr_s used? If so, perhaps
only that portion needs to be converted to host byte order?
I'd explore describing the members of struct cpt_parse_hdr_s as __be64.
And use FIELD_PREP and FIELD_GET to deal with parts of each __be64.
I think that would lead to a simpler implementation.
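A sketch of what I have in mind, with hypothetical names, based on the
layout shown further down in this patch:

	struct cpt_parse_hdr_s {
		__be64 w0;		/* cookie, match_id, ... */
		__be64 wqe_ptr;
		__be64 w2;
		__be64 w3;
		__be64 misc;
	};

	#define CPT_PARSE_HDR_W0_COOKIE	GENMASK_ULL(31, 0)

	cookie = FIELD_GET(CPT_PARSE_HDR_W0_COOKIE, be64_to_cpu(cptp->w0));
	wqe = phys_to_virt(otx2_iova_to_phys(pfvf->iommu_domain,
					     be64_to_cpu(cptp->wqe_ptr)));

That way only the words which are actually used are converted, and
Sparse can verify the byte-order handling.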
> +
> + cptp = (struct cpt_parse_hdr_s *)va;
> +
> + /* Convert the wqe_ptr from CPT_PARSE_HDR_S to a CPU usable pointer */
> + wqe = (struct nix_wqe_rx_s *)phys_to_virt(otx2_iova_to_phys(pfvf->iommu_domain,
> + cptp->wqe_ptr));
There is probably no need to cast from void * here either.
wqe = phys_to_virt(otx2_iova_to_phys(pfvf->iommu_domain,
cptp->wqe_ptr));
> +
> + /* Get the XFRM state pointer stored in SA context */
> + va_ptr = pfvf->ipsec.inb_sa->base +
> + (cptp->cookie * pfvf->ipsec.sa_tbl_entry_sz) + 1024;
> + xs = (struct xfrm_state *)*va_ptr;
Maybe this can be more succinctly written as follows?
xs = pfvf->ipsec.inb_sa->base +
(cptp->cookie * pfvf->ipsec.sa_tbl_entry_sz) + 1024;
> +
> + /* Set XFRM offload status and flags for successful decryption */
> + sp = secpath_set(skb);
> + if (!sp) {
> + netdev_err(pfvf->netdev, "Failed to secpath_set\n");
> + wqe = NULL;
> + goto err_out;
> + }
> +
> + rcu_read_lock();
> + xfrm_state_hold(xs);
> + rcu_read_unlock();
> +
> + sp->xvec[sp->len++] = xs;
> + sp->olen++;
> +
> + xo = xfrm_offload(skb);
> + xo->flags = CRYPTO_DONE;
> + xo->status = CRYPTO_SUCCESS;
> +
> +err_out:
> + /* Free the metapacket memory here since it's not needed anymore */
> + pool = &pfvf->qset.pool[qidx];
> + otx2_free_bufs(pfvf, pool, *seg_addr - OTX2_HEAD_ROOM, pfvf->rbsize);
> + return wqe;
> +}
> +
> static int cn10k_inb_alloc_mcam_entry(struct otx2_nic *pfvf,
> struct cn10k_inb_sw_ctx_info *inb_ctx_info)
> {
> diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
> index aad5ebea64ef..68046e377486 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
> +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
> @@ -8,6 +8,7 @@
> #define CN10K_IPSEC_H
>
> #include <linux/types.h>
> +#include "otx2_struct.h"
>
> DECLARE_STATIC_KEY_FALSE(cn10k_ipsec_sa_enabled);
>
> @@ -302,6 +303,41 @@ struct cpt_sg_s {
> u64 rsvd_63_50 : 14;
> };
>
> +/* CPT Parse Header Structure for Inbound packets */
> +struct cpt_parse_hdr_s {
> + /* Word 0 */
> + u64 cookie : 32;
> + u64 match_id : 16;
> + u64 err_sum : 1;
> + u64 reas_sts : 4;
> + u64 reserved_53 : 1;
> + u64 et_owr : 1;
> + u64 pkt_fmt : 1;
> + u64 pad_len : 3;
> + u64 num_frags : 3;
> + u64 pkt_out : 2;
> +
> + /* Word 1 */
> + u64 wqe_ptr;
> +
> + /* Word 2 */
> + u64 frag_age : 16;
> + u64 res_32_16 : 16;
> + u64 pf_func : 16;
> + u64 il3_off : 8;
> + u64 fi_pad : 3;
> + u64 fi_offset : 5;
> +
> + /* Word 3 */
> + u64 hw_ccode : 8;
> + u64 uc_ccode : 8;
> + u64 res3_32_16 : 16;
> + u64 spi : 32;
> +
> + /* Word 4 */
> + u64 misc;
> +};
> +
> /* CPT LF_INPROG Register */
> #define CPT_LF_INPROG_INFLIGHT GENMASK_ULL(8, 0)
> #define CPT_LF_INPROG_GRB_CNT GENMASK_ULL(39, 32)
...
> diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
...
> @@ -355,8 +359,25 @@ static void otx2_rcv_pkt_handler(struct otx2_nic *pfvf,
> if (unlikely(!skb))
> return;
>
> - start = (void *)sg;
> - end = start + ((cqe->parse.desc_sizem1 + 1) * 16);
> + if (parse->chan & 0x800) {
> + orig_pkt_wqe = cn10k_ipsec_process_cpt_metapkt(pfvf, sg, skb, cq->cq_idx);
> + if (!orig_pkt_wqe) {
> + netdev_err(pfvf->netdev, "Invalid WQE in CPT metapacket\n");
> + napi_free_frags(napi);
> + cq->pool_ptrs++;
> + return;
> + }
> + /* Switch *sg to the orig_pkt_wqe's *sg which has the actual
> + * complete decrypted packet by CPT.
> + */
> + sg = &orig_pkt_wqe->sg;
> + start = (void *)sg;
I don't think this cast is necessary, start is a void *.
Likewise below.
> + end = start + ((orig_pkt_wqe->parse.desc_sizem1 + 1) * 16);
> + } else {
> + start = (void *)sg;
> + end = start + ((cqe->parse.desc_sizem1 + 1) * 16);
> + }
The (size + 1) * 16 calculation seems to be repeated.
Perhaps a helper function is appropriate.
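e.g. (helper name hypothetical):

	static inline int otx2_cqe_desc_len(int desc_sizem1)
	{
		return (desc_sizem1 + 1) * 16;
	}

	end = start + otx2_cqe_desc_len(orig_pkt_wqe->parse.desc_sizem1);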
> +
> while (start < end) {
> sg = (struct nix_rx_sg_s *)start;
> seg_addr = &sg->seg_addr;
> --
> 2.43.0
>
>
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [net-next PATCH v1 15/15] octeontx2-pf: ipsec: Add XFRM state and policy hooks for inbound flows
2025-05-02 13:19 ` [net-next PATCH v1 15/15] octeontx2-pf: ipsec: Add XFRM state and policy hooks for inbound flows Tanmay Jagdale
2025-05-07 6:42 ` kernel test robot
@ 2025-05-07 18:31 ` Simon Horman
1 sibling, 0 replies; 43+ messages in thread
From: Simon Horman @ 2025-05-07 18:31 UTC (permalink / raw)
To: Tanmay Jagdale
Cc: bbrezillon, arno, schalla, herbert, davem, sgoutham, lcherian,
gakula, jerinj, hkelam, sbhatta, andrew+netdev, edumazet, kuba,
pabeni, bbhushan2, bhelgaas, pstanner, gregkh, peterz, linux,
krzysztof.kozlowski, giovanni.cabiddu, linux-crypto, linux-kernel,
netdev, rkannoth, sumang, gcherian
On Fri, May 02, 2025 at 06:49:56PM +0530, Tanmay Jagdale wrote:
> Add XFRM state hook for inbound flows and configure the following:
> - Install an NPC rule to classify the 1st pass IPsec packets and
> direct them to the dedicated RQ
> - Allocate a free entry from the SA table and populate it with the
> SA context details based on xfrm state data.
> - Create a mapping of the SPI value to the SA table index. This is
> used by NIXRX to calculate the exact SA context pointer address
> based on the SPI in the packet.
> - Prepare the CPT SA context to decrypt buffer in place and the
> write it the CPT hardware via LMT operation.
> - When the XFRM state is deleted, clear this SA in CPT hardware.
>
> Also add XFRM Policy hooks to allow successful offload of inbound
> PACKET_MODE.
>
> Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
> ---
> .../marvell/octeontx2/nic/cn10k_ipsec.c | 449 ++++++++++++++++--
> 1 file changed, 419 insertions(+), 30 deletions(-)
>
> diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
> index bebf5cdedee4..6441598c7e0f 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
> +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
> @@ -448,7 +448,7 @@ static int cn10k_inb_alloc_mcam_entry(struct otx2_nic *pfvf,
> return err;
> }
>
> -static int cn10k_inb_install_flow(struct otx2_nic *pfvf, struct xfrm_state *x,
> +static int cn10k_inb_install_flow(struct otx2_nic *pfvf,
> struct cn10k_inb_sw_ctx_info *inb_ctx_info)
> {
> struct npc_install_flow_req *req;
> @@ -463,14 +463,14 @@ static int cn10k_inb_install_flow(struct otx2_nic *pfvf, struct xfrm_state *x,
> }
>
> req->entry = inb_ctx_info->npc_mcam_entry;
> - req->features |= BIT(NPC_IPPROTO_ESP) | BIT(NPC_IPSEC_SPI) | BIT(NPC_DMAC);
> + req->features |= BIT(NPC_IPPROTO_ESP) | BIT(NPC_IPSEC_SPI);
> req->intf = NIX_INTF_RX;
> req->index = pfvf->ipsec.inb_ipsec_rq;
> req->match_id = 0xfeed;
> req->channel = pfvf->hw.rx_chan_base;
> req->op = NIX_RX_ACTIONOP_UCAST_IPSEC;
> req->set_cntr = 1;
> - req->packet.spi = x->id.spi;
> + req->packet.spi = inb_ctx_info->spi;
I think this should be:
req->packet.spi = cpu_to_be32(inb_ctx_info->spi);
Flagged by Sparse.
Please also take a look at other Sparse warnings added by this patch (set).
> req->mask.spi = 0xffffffff;
>
> /* Send message to AF */
...
> +static int cn10k_inb_write_sa(struct otx2_nic *pf,
> + struct xfrm_state *x,
> + struct cn10k_inb_sw_ctx_info *inb_ctx_info)
> +{
> + dma_addr_t res_iova, dptr_iova, sa_iova;
> + struct cn10k_rx_sa_s *sa_dptr, *sa_cptr;
> + struct cpt_inst_s inst;
> + u32 sa_size, off;
> + struct cpt_res_s *res;
> + u64 reg_val;
> + int ret;
> +
> + res = dma_alloc_coherent(pf->dev, sizeof(struct cpt_res_s),
> + &res_iova, GFP_ATOMIC);
> + if (!res)
> + return -ENOMEM;
> +
> + sa_cptr = inb_ctx_info->sa_entry;
> + sa_iova = inb_ctx_info->sa_iova;
> + sa_size = sizeof(struct cn10k_rx_sa_s);
> +
> + sa_dptr = dma_alloc_coherent(pf->dev, sa_size, &dptr_iova, GFP_ATOMIC);
> + if (!sa_dptr) {
> + dma_free_coherent(pf->dev, sizeof(struct cpt_res_s), res,
> + res_iova);
> + return -ENOMEM;
> + }
> +
> + for (off = 0; off < (sa_size / 8); off++)
> + *((u64 *)sa_dptr + off) = cpu_to_be64(*((u64 *)sa_cptr + off));
> +
> + memset(&inst, 0, sizeof(struct cpt_inst_s));
> +
> + res->compcode = 0;
> + inst.res_addr = res_iova;
> + inst.dptr = (u64)dptr_iova;
> + inst.param2 = sa_size >> 3;
> + inst.dlen = sa_size;
> + inst.opcode_major = CN10K_IPSEC_MAJOR_OP_WRITE_SA;
> + inst.opcode_minor = CN10K_IPSEC_MINOR_OP_WRITE_SA;
> + inst.cptr = sa_iova;
> + inst.ctx_val = 1;
> + inst.egrp = CN10K_DEF_CPT_IPSEC_EGRP;
> +
> + /* Re-use Outbound CPT LF to install Ingress SAs as well because
> + * the driver does not own the ingress CPT LF.
> + */
> + pf->ipsec.io_addr = (__force u64)otx2_get_regaddr(pf, CN10K_CPT_LF_NQX(0));
I suspect this indicates that io_addr should have an __iomem annotation.
And users should be updated accordingly.
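i.e., something like this in the struct holding io_addr:

	void __iomem *io_addr;

which would allow the cast to be dropped here:

	pf->ipsec.io_addr = otx2_get_regaddr(pf, CN10K_CPT_LF_NQX(0));

assuming the other users of io_addr can be updated to match.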
> + cn10k_cpt_inst_flush(pf, &inst, sizeof(struct cpt_inst_s));
> + dmb(sy);
> +
> + ret = cn10k_wait_for_cpt_respose(pf, res);
> + if (ret)
> + goto out;
> +
> + /* Trigger CTX flush to write dirty data back to DRAM */
> + reg_val = FIELD_PREP(GENMASK_ULL(45, 0), sa_iova >> 7);
> + otx2_write64(pf, CN10K_CPT_LF_CTX_FLUSH, reg_val);
> +
> +out:
> + dma_free_coherent(pf->dev, sa_size, sa_dptr, dptr_iova);
> + dma_free_coherent(pf->dev, sizeof(struct cpt_res_s), res, res_iova);
> + return ret;
> +}
> +
> +static void cn10k_xfrm_inb_prepare_sa(struct otx2_nic *pf, struct xfrm_state *x,
> + struct cn10k_inb_sw_ctx_info *inb_ctx_info)
> +{
> + struct cn10k_rx_sa_s *sa_entry = inb_ctx_info->sa_entry;
> + int key_len = (x->aead->alg_key_len + 7) / 8;
> + u8 *key = x->aead->alg_key;
> + u32 sa_size = sizeof(struct cn10k_rx_sa_s);
> + u64 *tmp_key;
> + u32 *tmp_salt;
> + int idx;
> +
> + memset(sa_entry, 0, sizeof(struct cn10k_rx_sa_s));
> +
> + /* Disable ESN for now */
> + sa_entry->esn_en = 0;
> +
> + /* HW context offset is word-31 */
> + sa_entry->hw_ctx_off = 31;
> + sa_entry->pkind = NPC_RX_CPT_HDR_PKIND;
> + sa_entry->eth_ovrwr = 1;
> + sa_entry->pkt_output = 1;
> + sa_entry->pkt_format = 1;
> + sa_entry->orig_pkt_free = 0;
> + /* context push size is up to word 31 */
> + sa_entry->ctx_push_size = 31 + 1;
> + /* context size, 128 Byte aligned up */
> + sa_entry->ctx_size = (sa_size / OTX2_ALIGN) & 0xF;
> +
> + sa_entry->cookie = inb_ctx_info->sa_index;
> +
> + /* 1 word (??) prepanded to context header size */
> + sa_entry->ctx_hdr_size = 1;
> + /* Mark SA entry valid */
> + sa_entry->aop_valid = 1;
> +
> + sa_entry->sa_dir = 0; /* Inbound */
> + sa_entry->ipsec_protocol = 1; /* ESP */
> + /* Default to Transport Mode */
> + if (x->props.mode == XFRM_MODE_TUNNEL)
> + sa_entry->ipsec_mode = 1; /* Tunnel Mode */
> +
> + sa_entry->et_ovrwr_ddr_en = 1;
> + sa_entry->enc_type = 5; /* AES-GCM only */
> + sa_entry->aes_key_len = 1; /* AES key length 128 */
> + sa_entry->l2_l3_hdr_on_error = 1;
> + sa_entry->spi = cpu_to_be32(x->id.spi);
> +
> + /* Last 4 bytes are salt */
> + key_len -= 4;
> + memcpy(sa_entry->cipher_key, key, key_len);
> + tmp_key = (u64 *)sa_entry->cipher_key;
> +
> + for (idx = 0; idx < key_len / 8; idx++)
> + tmp_key[idx] = be64_to_cpu(tmp_key[idx]);
> +
> + memcpy(&sa_entry->iv_gcm_salt, key + key_len, 4);
> + tmp_salt = (u32 *)&sa_entry->iv_gcm_salt;
> + *tmp_salt = be32_to_cpu(*tmp_salt);
Maybe I messed it up, but this seems clearer to me:
void *key = x->aead->alg_key;
...
sa_entry->iv_gcm_salt = be32_to_cpup(key + key_len);
> +
> + /* Write SA context data to memory before enabling */
> + wmb();
> +
> + /* Enable SA */
> + sa_entry->sa_valid = 1;
> +}
> +
> static int cn10k_ipsec_get_hw_ctx_offset(void)
> {
> /* Offset on Hardware-context offset in word */
...
> @@ -1316,8 +1450,96 @@ static int cn10k_ipsec_validate_state(struct xfrm_state *x,
> static int cn10k_ipsec_inb_add_state(struct xfrm_state *x,
> struct netlink_ext_ack *extack)
> {
...
> + netdev_dbg(netdev, "inb_ctx_info: sa_index:%d spi:0x%x mcam_entry:%d"
> + " hash_index:0x%x way:0x%x\n",
Please don't split strings. It makes searching for them more difficult.
This is an exception to the 80 column line length rule.
Although you may want to consider making the string shorter.
...
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [net-next PATCH v1 02/15] octeontx2-af: Configure crypto hardware for inline ipsec
2025-05-06 20:24 ` Simon Horman
@ 2025-05-08 10:56 ` Bharat Bhushan
0 siblings, 0 replies; 43+ messages in thread
From: Bharat Bhushan @ 2025-05-08 10:56 UTC (permalink / raw)
To: Simon Horman
Cc: Tanmay Jagdale, bbrezillon, arno, schalla, herbert, davem,
sgoutham, lcherian, gakula, jerinj, hkelam, sbhatta,
andrew+netdev, edumazet, kuba, pabeni, bbhushan2, bhelgaas,
pstanner, gregkh, peterz, linux, krzysztof.kozlowski,
giovanni.cabiddu, linux-crypto, linux-kernel, netdev, rkannoth,
sumang, gcherian
On Wed, May 7, 2025 at 2:20 AM Simon Horman <horms@kernel.org> wrote:
>
> On Fri, May 02, 2025 at 06:49:43PM +0530, Tanmay Jagdale wrote:
> > From: Bharat Bhushan <bbhushan2@marvell.com>
> >
> > Currently cpt_rx_inline_lf_cfg mailbox is handled by CPT PF
> > driver to configure inbound inline ipsec. Ideally inbound
> > inline ipsec configuration should be done by AF driver.
> >
> > This patch adds support to allocate, attach and initialize
> > a cptlf from AF. It also configures NIX to send CPT instruction
> > if the packet needs inline ipsec processing and configures
> > CPT LF to handle inline inbound instruction received from NIX.
> >
> > Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
> > Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
>
> Hi Bharat and Tanmay,
>
> Some minor feedback from my side.
Hi Simon,
Most of the comments are acked. Please see inline
>
> ...
>
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
> > index 973ff5cf1a7d..8540a04a92f9 100644
> > --- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
> > +++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
> > @@ -1950,6 +1950,20 @@ enum otx2_cpt_eng_type {
> > OTX2_CPT_MAX_ENG_TYPES,
> > };
> >
> > +struct cpt_rx_inline_lf_cfg_msg {
> > + struct mbox_msghdr hdr;
> > + u16 sso_pf_func;
> > + u16 param1;
> > + u16 param2;
> > + u16 opcode;
> > + u32 credit;
> > + u32 credit_th;
> > + u16 bpid;
>
> On arm64 (at least) there will be a 2 byte hole here. Is that intended?
It is not intentional, will mark as reserved.
>
> And, not strictly related to this patch, struct mboxhdr also has
> a 2 byte hole before its rc member. Perhaps it would be nice
> if it was it filled by a reserved member?
struct mbox_msghdr is not used globally, will prefer not to touch that
as part of this patch series.
>
> > + u32 reserved;
> > + u8 ctx_ilen_valid : 1;
> > + u8 ctx_ilen : 7;
> > +};
> > +
> > struct cpt_set_egrp_num {
> > struct mbox_msghdr hdr;
> > bool set;
>
> ...
>
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
> > index fa403da555ff..6923fd756b19 100644
> > --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
> > +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
> > @@ -525,8 +525,38 @@ struct rvu_cpt_eng_grp {
> > u8 grp_num;
> > };
> >
> > +struct rvu_cpt_rx_inline_lf_cfg {
> > + u16 sso_pf_func;
> > + u16 param1;
> > + u16 param2;
> > + u16 opcode;
> > + u32 credit;
> > + u32 credit_th;
> > + u16 bpid;
>
> FWIIW, there is a hole here too.
ACK, will mark reserved.
>
> > + u32 reserved;
> > + u8 ctx_ilen_valid : 1;
> > + u8 ctx_ilen : 7;
> > +};
>
> ...
>
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
>
> ...
>
> > @@ -1087,6 +1115,72 @@ static void cpt_rxc_teardown(struct rvu *rvu, int blkaddr)
> > #define DQPTR GENMASK_ULL(19, 0)
> > #define NQPTR GENMASK_ULL(51, 32)
> >
> > +static void cpt_rx_ipsec_lf_enable_iqueue(struct rvu *rvu, int blkaddr,
> > + int slot)
> > +{
> > + u64 val;
> > +
> > + /* Set Execution Enable of instruction queue */
> > + val = otx2_cpt_read64(rvu->pfreg_base, blkaddr, slot, CPT_LF_INPROG);
> > + val |= BIT_ULL(16);
>
> Bit 16 seems to have a meaning, it would be nice if a #define was used
> I mean something like this (but probably not actually this :)
>
> #define CPT_LF_INPROG_ENA_QUEUE BIT_ULL(16)
>
> Perhaps defined near where CPT_LF_INPROG is defined.
ACK
>
> > + otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot, CPT_LF_INPROG, val);
> > +
> > + /* Set iqueue's enqueuing */
> > + val = otx2_cpt_read64(rvu->pfreg_base, blkaddr, slot, CPT_LF_CTL);
> > + val |= BIT_ULL(0);
>
> Ditto.
ACK
>
> > + otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot, CPT_LF_CTL, val);
> > +}
> > +
> > +static void cpt_rx_ipsec_lf_disable_iqueue(struct rvu *rvu, int blkaddr,
> > + int slot)
> > +{
> > + int timeout = 1000000;
> > + u64 inprog, inst_ptr;
> > + u64 qsize, pending;
> > + int i = 0;
> > +
> > + /* Disable instructions enqueuing */
> > + otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot, CPT_LF_CTL, 0x0);
> > +
> > + inprog = otx2_cpt_read64(rvu->pfreg_base, blkaddr, slot, CPT_LF_INPROG);
> > + inprog |= BIT_ULL(16);
> > + otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot, CPT_LF_INPROG, inprog);
> > +
> > + qsize = otx2_cpt_read64(rvu->pfreg_base, blkaddr, slot, CPT_LF_Q_SIZE)
> > + & 0x7FFF;
> > + do {
> > + inst_ptr = otx2_cpt_read64(rvu->pfreg_base, blkaddr, slot,
> > + CPT_LF_Q_INST_PTR);
> > + pending = (FIELD_GET(XQ_XOR, inst_ptr) * qsize * 40) +
> > + FIELD_GET(NQPTR, inst_ptr) -
> > + FIELD_GET(DQPTR, inst_ptr);
>
> nit: I don't think you need the outer parentheses here.
> But if you do, the two lines above should be indented by one more
> character.
>
> > + udelay(1);
> > + timeout--;
> > + } while ((pending != 0) && (timeout != 0));
>
> nit: I don't think you need the inner parentheses here (x2).
Okay.
>
> > +
> > + if (timeout == 0)
> > + dev_warn(rvu->dev, "TIMEOUT: CPT poll on pending instructions\n");
> > +
> > + timeout = 1000000;
> > + /* Wait for CPT queue to become execution-quiescent */
> > + do {
> > + inprog = otx2_cpt_read64(rvu->pfreg_base, blkaddr, slot,
> > + CPT_LF_INPROG);
> > + if ((FIELD_GET(INFLIGHT, inprog) == 0) &&
> > + (FIELD_GET(GRB_CNT, inprog) == 0)) {
> > + i++;
> > + } else {
> > + i = 0;
> > + timeout--;
> > + }
> > + } while ((timeout != 0) && (i < 10));
> > +
> > + if (timeout == 0)
> > + dev_warn(rvu->dev, "TIMEOUT: CPT poll on inflight count\n");
> > + /* Wait for 2 us to flush all queue writes to memory */
> > + udelay(2);
> > +}
> > +
> > static void cpt_lf_disable_iqueue(struct rvu *rvu, int blkaddr, int slot)
> > {
> > int timeout = 1000000;
> > @@ -1310,6 +1404,474 @@ int rvu_cpt_ctx_flush(struct rvu *rvu, u16 pcifunc)
> > return 0;
> > }
> >
> > +static irqreturn_t rvu_cpt_rx_ipsec_misc_intr_handler(int irq, void *ptr)
> > +{
> > + struct rvu_block *block = ptr;
> > + struct rvu *rvu = block->rvu;
> > + int blkaddr = block->addr;
> > + struct device *dev = rvu->dev;
> > + int slot = 0;
> > + u64 val;
> > +
> > + val = otx2_cpt_read64(rvu->pfreg_base, blkaddr, slot, CPT_LF_MISC_INT);
> > +
> > + if (val & (1 << 6)) {
>
> Along the lines of my earlier comment, bit 6 seems to have a meaning too.
> Likewise for other bits below.
ack
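For example, something along these lines (the names are guesses derived
from the error messages below, not taken from the hardware manual):

    #define CPT_LF_MISC_INT_FAULT  BIT_ULL(6) /* memory error on CPT_INST_S */
    #define CPT_LF_MISC_INT_HWERR  BIT_ULL(5) /* engine HW error */
    #define CPT_LF_MISC_INT_SWERR  BIT_ULL(3) /* SMMU fault on CPT_RES_S write */
    #define CPT_LF_MISC_INT_NQERR  BIT_ULL(2) /* instruction queue memory error */
    #define CPT_LF_MISC_INT_IRDE   BIT_ULL(1) /* instruction enqueue error */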
>
> > + dev_err(dev, "Memory error detected while executing CPT_INST_S, LF %d.\n",
> > + slot);
> > + } else if (val & (1 << 5)) {
> > + dev_err(dev, "HW error from an engine executing CPT_INST_S, LF %d.",
> > + slot);
> > + } else if (val & (1 << 3)) {
> > + dev_err(dev, "SMMU fault while writing CPT_RES_S to CPT_INST_S[RES_ADDR], LF %d.\n",
> > + slot);
> > + } else if (val & (1 << 2)) {
> > + dev_err(dev, "Memory error when accessing instruction memory queue CPT_LF_Q_BASE[ADDR].\n");
> > + } else if (val & (1 << 1)) {
> > + dev_err(dev, "Error enqueuing an instruction received at CPT_LF_NQ.\n");
> > + } else {
> > + dev_err(dev, "Unhandled interrupt in CPT LF %d\n", slot);
> > + return IRQ_NONE;
> > + }
> > +
> > + /* Acknowledge interrupts */
> > + otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot, CPT_LF_MISC_INT,
> > + val & CPT_LF_MISC_INT_MASK);
> > +
> > + return IRQ_HANDLED;
> > +}
>
> ...
>
> > +/* Allocate memory for CPT outbound Instruction queue.
> > + * Instruction queue memory format is:
> > + * -----------------------------
> > + * | Instruction Group memory |
> > + * | (CPT_LF_Q_SIZE[SIZE_DIV40] |
> > + * | x 16 Bytes) |
> > + * | |
> > + * ----------------------------- <-- CPT_LF_Q_BASE[ADDR]
> > + * | Flow Control (128 Bytes) |
> > + * | |
> > + * -----------------------------
> > + * | Instruction Memory |
> > + * | (CPT_LF_Q_SIZE[SIZE_DIV40] |
> > + * | × 40 × 64 bytes) |
> > + * | |
> > + * -----------------------------
> > + */
>
> Nice diagram :)
:) Somehow the line alignment does not look good over email, but it
looks good when the patch is applied. Will see how I can fix this.
>
> ...
>
> > +static int rvu_rx_cpt_set_grp_pri_ilen(struct rvu *rvu, int blkaddr, int cptlf)
> > +{
> > + u64 reg_val;
> > +
> > + reg_val = rvu_read64(rvu, blkaddr, CPT_AF_LFX_CTL(cptlf));
> > + /* Set High priority */
> > + reg_val |= 1;
> > + /* Set engine group */
> > + reg_val |= ((1ULL << rvu->rvu_cpt.inline_ipsec_egrp) << 48);
> > + /* Set ilen if valid */
> > + if (rvu->rvu_cpt.rx_cfg.ctx_ilen_valid)
> > + reg_val |= rvu->rvu_cpt.rx_cfg.ctx_ilen << 17;
>
> Along the same lines. 48 and 17 seem to have meaning.
> Perhaps define appropriate masks created using GENMASK_ULL
> and use FIELD_PREP?
ack
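A sketch of what that could look like (the mask names and exact field
widths are assumptions based on the shifts above):

    #define CPT_AF_LFX_CTL_PRI      BIT_ULL(0)
    #define CPT_AF_LFX_CTL_CTX_ILEN GENMASK_ULL(23, 17)
    #define CPT_AF_LFX_CTL_EGRP     GENMASK_ULL(55, 48)

    reg_val |= CPT_AF_LFX_CTL_PRI;
    reg_val |= FIELD_PREP(CPT_AF_LFX_CTL_EGRP,
                          BIT_ULL(rvu->rvu_cpt.inline_ipsec_egrp));
    if (rvu->rvu_cpt.rx_cfg.ctx_ilen_valid)
        reg_val |= FIELD_PREP(CPT_AF_LFX_CTL_CTX_ILEN,
                              rvu->rvu_cpt.rx_cfg.ctx_ilen);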
>
> > +
> > + rvu_write64(rvu, blkaddr, CPT_AF_LFX_CTL(cptlf), reg_val);
> > + return 0;
> > +}
>
> ...
>
> > +static void rvu_rx_cptlf_cleanup(struct rvu *rvu, int blkaddr, int slot)
> > +{
> > + /* IRQ cleanup */
> > + rvu_cpt_rx_inline_cleanup_irq(rvu, blkaddr, slot);
> > +
> > + /* CPTLF cleanup */
> > + rvu_cpt_rx_inline_cptlf_clean(rvu, blkaddr, slot);
> > +}
> > +
> > +int rvu_mbox_handler_cpt_rx_inline_lf_cfg(struct rvu *rvu,
> > + struct cpt_rx_inline_lf_cfg_msg *req,
> > + struct msg_rsp *rsp)
>
> Compilers warn that rvu_mbox_handler_cpt_rx_inline_lf_cfg doesn't have
> a prototype.
>
> I think this can be resolved by squashing the following hunk,
> which appears in a subsequent patch in this series, into this patch.
>
> diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
> index 8540a04a92f9..ad74a27888da 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
> +++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
> @@ -213,6 +213,8 @@ M(CPT_FLT_ENG_INFO, 0xA09, cpt_flt_eng_info, cpt_flt_eng_info_req, \
> cpt_flt_eng_info_rsp) \
> M(CPT_SET_ENG_GRP_NUM, 0xA0A, cpt_set_eng_grp_num, cpt_set_egrp_num, \
> msg_rsp) \
> +M(CPT_RX_INLINE_LF_CFG, 0xBFE, cpt_rx_inline_lf_cfg, cpt_rx_inline_lf_cfg_msg, \
> + msg_rsp) \
> /* SDP mbox IDs (range 0x1000 - 0x11FF) */ \
> M(SET_SDP_CHAN_INFO, 0x1000, set_sdp_chan_info, sdp_chan_info_msg, msg_rsp) \
> M(GET_SDP_CHAN_INFO, 0x1001, get_sdp_chan_info, msg_req, sdp_get_chan_info_msg) \
ack
>
> ...
>
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.h
>
> ...
>
> > +/* CPT instruction queue length in bytes */
> > +#define RVU_CPT_INST_QLEN_BYTES \
> > + ((RVU_CPT_SIZE_DIV40 * 40 * RVU_CPT_INST_SIZE) + \
> > + RVU_CPT_INST_QLEN_EXTRA_BYTES)
>
> nit: I think the line above should be indented by one more character
Somehow this looks good when the patch is applied; I need to see why
the indentation got broken in the email.
>
> > +
> > +/* CPT instruction group queue length in bytes */
> > +#define RVU_CPT_INST_GRP_QLEN_BYTES \
> > + ((RVU_CPT_SIZE_DIV40 + RVU_CPT_EXTRA_SIZE_DIV40) * 16)
> > +
> > +/* CPT FC length in bytes */
> > +#define RVU_CPT_Q_FC_LEN 128
> > +
> > +/* CPT LF_Q_SIZE Register */
> > +#define CPT_LF_Q_SIZE_DIV40 GENMASK_ULL(14, 0)
> > +
> > +/* CPT invalid engine group num */
> > +#define OTX2_CPT_INVALID_CRYPTO_ENG_GRP 0xFF
> > +
> > +/* Fastpath ipsec opcode with inplace processing */
> > +#define OTX2_CPT_INLINE_RX_OPCODE (0x26 | (1 << 6))
> > +#define CN10K_CPT_INLINE_RX_OPCODE (0x29 | (1 << 6))
>
> Along the lines of earlier comments, bit 6 seems to have a meaning here.
ack
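Perhaps something like this (the macro name is a guess based on the
"inplace processing" comment above):

    #define CPT_INLINE_RX_OPCODE_INPLACE  BIT(6)

    #define OTX2_CPT_INLINE_RX_OPCODE  (0x26 | CPT_INLINE_RX_OPCODE_INPLACE)
    #define CN10K_CPT_INLINE_RX_OPCODE (0x29 | CPT_INLINE_RX_OPCODE_INPLACE)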
>
> > +
> > +/* Calculate CPT register offset */
> > +#define CPT_RVU_FUNC_ADDR_S(blk, slot, offs) \
> > + (((blk) << 20) | ((slot) << 12) | (offs))
>
> And perhaps this is another candidate for GENMASK + FIELD_PREP.
ack.
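For instance (the field widths are assumed from the shifts in the
original macro):

    #define CPT_RVU_FUNC_BLK   GENMASK_ULL(21, 20)
    #define CPT_RVU_FUNC_SLOT  GENMASK_ULL(19, 12)
    #define CPT_RVU_FUNC_OFFS  GENMASK_ULL(11, 0)

    #define CPT_RVU_FUNC_ADDR_S(blk, slot, offs) \
            (FIELD_PREP(CPT_RVU_FUNC_BLK, blk) | \
             FIELD_PREP(CPT_RVU_FUNC_SLOT, slot) | \
             FIELD_PREP(CPT_RVU_FUNC_OFFS, offs))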
Thanks
-Bharat
>
> ...
>
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [net-next PATCH v1 07/15] octeontx2-af: Add support for SPI to SA index translation
2025-05-03 16:12 ` Kalesh Anakkur Purayil
@ 2025-05-13 5:08 ` Tanmay Jagdale
0 siblings, 0 replies; 43+ messages in thread
From: Tanmay Jagdale @ 2025-05-13 5:08 UTC (permalink / raw)
To: Kalesh Anakkur Purayil
Cc: brezillon, schalla, herbert, davem, sgoutham, lcherian, gakula,
jerinj, hkelam, sbhatta, andrew+netdev, edumazet, kuba, pabeni,
bbhushan2, bhelgaas, pstanner, gregkh, peterz, linux,
linux-crypto, linux-kernel, netdev, rkannoth, sumang, gcherian,
Kiran Kumar K, Nithin Dabilpuram
Hi Kalesh,
On 2025-05-03 at 21:42:01, Kalesh Anakkur Purayil (kalesh-anakkur.purayil@broadcom.com) wrote:
> On Fri, May 2, 2025 at 6:56 PM Tanmay Jagdale <tanmay@marvell.com> wrote:
> >
> > From: Kiran Kumar K <kirankumark@marvell.com>
> >
> > In case of IPsec, the inbound SPI can be random. HW supports mapping
> > SPI to an arbitrary SA index. SPI to SA index is done using a lookup
> > in NPC cam entry with key as SPI, MATCH_ID, LFID. Adding Mbox API
> > changes to configure the match table.
> >
> > Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
> > Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
> > Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>
> > Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
> > ---
> > .../ethernet/marvell/octeontx2/af/Makefile | 2 +-
> > .../net/ethernet/marvell/octeontx2/af/mbox.h | 27 +++
> > .../net/ethernet/marvell/octeontx2/af/rvu.c | 4 +
> > .../net/ethernet/marvell/octeontx2/af/rvu.h | 13 ++
> > .../ethernet/marvell/octeontx2/af/rvu_nix.c | 6 +
> > .../marvell/octeontx2/af/rvu_nix_spi.c | 220 ++++++++++++++++++
> > .../ethernet/marvell/octeontx2/af/rvu_reg.h | 4 +
> > 7 files changed, 275 insertions(+), 1 deletion(-)
> > create mode 100644 drivers/net/ethernet/marvell/octeontx2/af/rvu_nix_spi.c
> >
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/af/Makefile b/drivers/net/ethernet/marvell/octeontx2/af/Makefile
> > index ccea37847df8..49318017f35f 100644
> > --- a/drivers/net/ethernet/marvell/octeontx2/af/Makefile
> > +++ b/drivers/net/ethernet/marvell/octeontx2/af/Makefile
> > @@ -8,7 +8,7 @@ obj-$(CONFIG_OCTEONTX2_MBOX) += rvu_mbox.o
> > obj-$(CONFIG_OCTEONTX2_AF) += rvu_af.o
> >
> > rvu_mbox-y := mbox.o rvu_trace.o
> > -rvu_af-y := cgx.o rvu.o rvu_cgx.o rvu_npa.o rvu_nix.o \
> > +rvu_af-y := cgx.o rvu.o rvu_cgx.o rvu_npa.o rvu_nix.o rvu_nix_spi.o \
> > rvu_reg.o rvu_npc.o rvu_debugfs.o ptp.o rvu_npc_fs.o \
> > rvu_cpt.o rvu_devlink.o rpm.o rvu_cn10k.o rvu_switch.o \
> > rvu_sdp.o rvu_npc_hash.o mcs.o mcs_rvu_if.o mcs_cnf10kb.o \
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
> > index 715efcc04c9e..5cebf10a15a7 100644
> > --- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
> > +++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
> > @@ -326,6 +326,10 @@ M(NIX_READ_INLINE_IPSEC_CFG, 0x8023, nix_read_inline_ipsec_cfg, \
> > M(NIX_LF_INLINE_RQ_CFG, 0x8024, nix_lf_inline_rq_cfg, \
> > nix_rq_cpt_field_mask_cfg_req, \
> > msg_rsp) \
> > +M(NIX_SPI_TO_SA_ADD, 0x8026, nix_spi_to_sa_add, nix_spi_to_sa_add_req, \
> > + nix_spi_to_sa_add_rsp) \
> > +M(NIX_SPI_TO_SA_DELETE, 0x8027, nix_spi_to_sa_delete, nix_spi_to_sa_delete_req, \
> > + msg_rsp) \
> > M(NIX_MCAST_GRP_CREATE, 0x802b, nix_mcast_grp_create, nix_mcast_grp_create_req, \
> > nix_mcast_grp_create_rsp) \
> > M(NIX_MCAST_GRP_DESTROY, 0x802c, nix_mcast_grp_destroy, nix_mcast_grp_destroy_req, \
> > @@ -880,6 +884,29 @@ enum nix_rx_vtag0_type {
> > NIX_AF_LFX_RX_VTAG_TYPE7,
> > };
> >
> > +/* For SPI to SA index add */
> > +struct nix_spi_to_sa_add_req {
> > + struct mbox_msghdr hdr;
> > + u32 sa_index;
> > + u32 spi_index;
> > + u16 match_id;
> > + bool valid;
> > +};
> > +
> > +struct nix_spi_to_sa_add_rsp {
> > + struct mbox_msghdr hdr;
> > + u16 hash_index;
> > + u8 way;
> > + u8 is_duplicate;
> > +};
> > +
> > +/* To free SPI to SA index */
> > +struct nix_spi_to_sa_delete_req {
> > + struct mbox_msghdr hdr;
> > + u16 hash_index;
> > + u8 way;
> > +};
> > +
> > /* For NIX LF context alloc and init */
> > struct nix_lf_alloc_req {
> > struct mbox_msghdr hdr;
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
> > index ea346e59835b..2b7c09bb24e1 100644
> > --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
> > +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
> > @@ -90,6 +90,9 @@ static void rvu_setup_hw_capabilities(struct rvu *rvu)
> >
> > if (is_rvu_npc_hash_extract_en(rvu))
> > hw->cap.npc_hash_extract = true;
> > +
> > + if (is_rvu_nix_spi_to_sa_en(rvu))
> > + hw->cap.spi_to_sas = 0x2000;
> > }
> >
> > /* Poll a RVU block's register 'offset', for a 'zero'
> > @@ -2723,6 +2726,7 @@ static void __rvu_flr_handler(struct rvu *rvu, u16 pcifunc)
> > rvu_blklf_teardown(rvu, pcifunc, BLKADDR_NPA);
> > rvu_reset_lmt_map_tbl(rvu, pcifunc);
> > rvu_detach_rsrcs(rvu, NULL, pcifunc);
> > +
> > /* In scenarios where PF/VF drivers detach NIXLF without freeing MCAM
> > * entries, check and free the MCAM entries explicitly to avoid leak.
> > * Since LF is detached use LF number as -1.
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
> > index 71407f6318ec..42fc3e762bc0 100644
> > --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
> > +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
> > @@ -395,6 +395,7 @@ struct hw_cap {
> > u16 nix_txsch_per_cgx_lmac; /* Max Q's transmitting to CGX LMAC */
> > u16 nix_txsch_per_lbk_lmac; /* Max Q's transmitting to LBK LMAC */
> > u16 nix_txsch_per_sdp_lmac; /* Max Q's transmitting to SDP LMAC */
> > + u16 spi_to_sas; /* Num of SPI to SA index */
> > bool nix_fixed_txschq_mapping; /* Schq mapping fixed or flexible */
> > bool nix_shaping; /* Is shaping and coloring supported */
> > bool nix_shaper_toggle_wait; /* Shaping toggle needs poll/wait */
> > @@ -800,6 +801,17 @@ static inline bool is_rvu_npc_hash_extract_en(struct rvu *rvu)
> > return true;
> > }
> >
> > +static inline bool is_rvu_nix_spi_to_sa_en(struct rvu *rvu)
> > +{
> > + u64 nix_const2;
> > +
> > + nix_const2 = rvu_read64(rvu, BLKADDR_NIX0, NIX_AF_CONST2);
> > + if ((nix_const2 >> 48) & 0xffff)
> > + return true;
> > +
> > + return false;
> > +}
> > +
> > static inline u16 rvu_nix_chan_cgx(struct rvu *rvu, u8 cgxid,
> > u8 lmacid, u8 chan)
> > {
> > @@ -992,6 +1004,7 @@ int nix_get_struct_ptrs(struct rvu *rvu, u16 pcifunc,
> > struct nix_hw **nix_hw, int *blkaddr);
> > int rvu_nix_setup_ratelimit_aggr(struct rvu *rvu, u16 pcifunc,
> > u16 rq_idx, u16 match_id);
> > +int rvu_nix_free_spi_to_sa_table(struct rvu *rvu, uint16_t pcifunc);
> > int nix_aq_context_read(struct rvu *rvu, struct nix_hw *nix_hw,
> > struct nix_cn10k_aq_enq_req *aq_req,
> > struct nix_cn10k_aq_enq_rsp *aq_rsp,
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
> > index b15fd331facf..68525bfc8e6d 100644
> > --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
> > +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
> > @@ -1751,6 +1751,9 @@ int rvu_mbox_handler_nix_lf_free(struct rvu *rvu, struct nix_lf_free_req *req,
> > else
> > rvu_npc_free_mcam_entries(rvu, pcifunc, nixlf);
> >
> > + /* Reset SPI to SA index table */
> > + rvu_nix_free_spi_to_sa_table(rvu, pcifunc);
> > +
> > /* Free any tx vtag def entries used by this NIX LF */
> > if (!(req->flags & NIX_LF_DONT_FREE_TX_VTAG))
> > nix_free_tx_vtag_entries(rvu, pcifunc);
> > @@ -5312,6 +5315,9 @@ void rvu_nix_lf_teardown(struct rvu *rvu, u16 pcifunc, int blkaddr, int nixlf)
> > nix_rx_sync(rvu, blkaddr);
> > nix_txschq_free(rvu, pcifunc);
> >
> > + /* Reset SPI to SA index table */
> > + rvu_nix_free_spi_to_sa_table(rvu, pcifunc);
> > +
> > clear_bit(NIXLF_INITIALIZED, &pfvf->flags);
> >
> > if (is_pf_cgxmapped(rvu, pf) && rvu->rep_mode)
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix_spi.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix_spi.c
> > new file mode 100644
> > index 000000000000..b8acc23a47bc
> > --- /dev/null
> > +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix_spi.c
> > @@ -0,0 +1,220 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/* Marvell RVU Admin Function driver
> > + *
> > + * Copyright (C) 2022 Marvell.
> Copyright year 2025?
ACK.
> > + *
> > + */
> > +
> > +#include "rvu.h"
> > +
> > +static bool nix_spi_to_sa_index_check_duplicate(struct rvu *rvu,
> > + struct nix_spi_to_sa_add_req *req,
> > + struct nix_spi_to_sa_add_rsp *rsp,
> > + int blkaddr, int16_t index, u8 way,
> > + bool *is_valid, int lfidx)
> > +{
> > + u32 spi_index;
> > + u16 match_id;
> > + bool valid;
> > + u8 lfid;
> > + u64 wkey;
> Maintain RCT order while declaring variables
ACK.
> > +
> > + wkey = rvu_read64(rvu, blkaddr, NIX_AF_SPI_TO_SA_KEYX_WAYX(index, way));
> > + spi_index = (wkey & 0xFFFFFFFF);
> > + match_id = ((wkey >> 32) & 0xFFFF);
> > + lfid = ((wkey >> 48) & 0x7f);
> > + valid = ((wkey >> 55) & 0x1);
> > +
> > + *is_valid = valid;
> > + if (!valid)
> > + return 0;
> > +
> > + if (req->spi_index == spi_index && req->match_id == match_id &&
> > + lfidx == lfid) {
> > + rsp->hash_index = index;
> > + rsp->way = way;
> > + rsp->is_duplicate = true;
> > + return 1;
> > + }
> > + return 0;
> > +}
> > +
> > +static void nix_spi_to_sa_index_table_update(struct rvu *rvu,
> > + struct nix_spi_to_sa_add_req *req,
> > + struct nix_spi_to_sa_add_rsp *rsp,
> > + int blkaddr, int16_t index, u8 way,
> > + int lfidx)
> > +{
> > + u64 wvalue;
> > + u64 wkey;
> > +
> > + wkey = (req->spi_index | ((u64)req->match_id << 32) |
> > + (((u64)lfidx) << 48) | ((u64)req->valid << 55));
> > + rvu_write64(rvu, blkaddr, NIX_AF_SPI_TO_SA_KEYX_WAYX(index, way),
> > + wkey);
> > + wvalue = (req->sa_index & 0xFFFFFFFF);
> > + rvu_write64(rvu, blkaddr, NIX_AF_SPI_TO_SA_VALUEX_WAYX(index, way),
> > + wvalue);
> > + rsp->hash_index = index;
> > + rsp->way = way;
> > + rsp->is_duplicate = false;
> > +}
> > +
> > +int rvu_mbox_handler_nix_spi_to_sa_delete(struct rvu *rvu,
> > + struct nix_spi_to_sa_delete_req *req,
> > + struct msg_rsp *rsp)
> > +{
> > + struct rvu_hwinfo *hw = rvu->hw;
> > + u16 pcifunc = req->hdr.pcifunc;
> > + int lfidx, lfid;
> > + int blkaddr;
> > + u64 wvalue;
> > + u64 wkey;
> > + int ret = 0;
> > +
> > + if (!hw->cap.spi_to_sas)
> > + return NIX_AF_ERR_PARAM;
> > +
> > + if (!is_nixlf_attached(rvu, pcifunc)) {
> > + ret = NIX_AF_ERR_AF_LF_INVALID;
> > + goto exit;
> There is no need for a label here; you can return directly.
> > + }
> > +
> > + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc);
> > + lfidx = rvu_get_lf(rvu, &hw->block[blkaddr], pcifunc, 0);
> > + if (lfidx < 0) {
> > + ret = NIX_AF_ERR_AF_LF_INVALID;
> > + goto exit;
> There is no need for a label here; you can return directly.
Okay, I will get rid of the unnecessary gotos.
> > + }
> > +
> > + mutex_lock(&rvu->rsrc_lock);
> > +
> > + wkey = rvu_read64(rvu, blkaddr,
> > + NIX_AF_SPI_TO_SA_KEYX_WAYX(req->hash_index, req->way));
> > + lfid = ((wkey >> 48) & 0x7f);
> It would be nice if you use macros instead of these hard coded magic
> numbers. Same comment applies to whole patch series.
ACK. I will fix this in the entire patch series.
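For example, the key layout could be made self-describing like this
(the macro names are placeholders; the field positions come from the
open-coded shifts in this patch):

    #define NIX_SPI_TO_SA_KEY_SPI       GENMASK_ULL(31, 0)
    #define NIX_SPI_TO_SA_KEY_MATCH_ID  GENMASK_ULL(47, 32)
    #define NIX_SPI_TO_SA_KEY_LFID      GENMASK_ULL(54, 48)
    #define NIX_SPI_TO_SA_KEY_VALID     BIT_ULL(55)

    spi_index = FIELD_GET(NIX_SPI_TO_SA_KEY_SPI, wkey);
    match_id  = FIELD_GET(NIX_SPI_TO_SA_KEY_MATCH_ID, wkey);
    lfid      = FIELD_GET(NIX_SPI_TO_SA_KEY_LFID, wkey);
    valid     = FIELD_GET(NIX_SPI_TO_SA_KEY_VALID, wkey);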
> > + if (lfid != lfidx) {
> > + ret = NIX_AF_ERR_AF_LF_INVALID;
> > + goto unlock;
> > + }
> > +
> > + wkey = 0;
> > + rvu_write64(rvu, blkaddr,
> > + NIX_AF_SPI_TO_SA_KEYX_WAYX(req->hash_index, req->way), wkey);
> > + wvalue = 0;
> > + rvu_write64(rvu, blkaddr,
> > + NIX_AF_SPI_TO_SA_VALUEX_WAYX(req->hash_index, req->way), wvalue);
> > +unlock:
> > + mutex_unlock(&rvu->rsrc_lock);
> > +exit:
> > + return ret;
> > +}
> > +
> > +int rvu_mbox_handler_nix_spi_to_sa_add(struct rvu *rvu,
> > + struct nix_spi_to_sa_add_req *req,
> > + struct nix_spi_to_sa_add_rsp *rsp)
> > +{
> > + u16 way0_index, way1_index, way2_index, way3_index;
> > + struct rvu_hwinfo *hw = rvu->hw;
> > + u16 pcifunc = req->hdr.pcifunc;
> > + bool way0, way1, way2, way3;
> > + int ret = 0;
> > + int blkaddr;
> > + int lfidx;
> > + u64 value;
> > + u64 key;
> > +
> > + if (!hw->cap.spi_to_sas)
> > + return NIX_AF_ERR_PARAM;
> > +
> > + if (!is_nixlf_attached(rvu, pcifunc)) {
> > + ret = NIX_AF_ERR_AF_LF_INVALID;
> > + goto exit;
> There is no need for a label here; you can return directly.
> > + }
> > +
> > + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc);
> > + lfidx = rvu_get_lf(rvu, &hw->block[blkaddr], pcifunc, 0);
> > + if (lfidx < 0) {
> > + ret = NIX_AF_ERR_AF_LF_INVALID;
> > + goto exit;
> There is no need for a label here; you can return directly.
ACK.
> > + }
> > +
> > + mutex_lock(&rvu->rsrc_lock);
> > +
> > + key = (((u64)lfidx << 48) | ((u64)req->match_id << 32) | req->spi_index);
> > + rvu_write64(rvu, blkaddr, NIX_AF_SPI_TO_SA_HASH_KEY, key);
> > + value = rvu_read64(rvu, blkaddr, NIX_AF_SPI_TO_SA_HASH_VALUE);
> > + way0_index = (value & 0x7ff);
> > + way1_index = ((value >> 16) & 0x7ff);
> > + way2_index = ((value >> 32) & 0x7ff);
> > + way3_index = ((value >> 48) & 0x7ff);
> > +
> > + /* Check for duplicate entry */
> > + if (nix_spi_to_sa_index_check_duplicate(rvu, req, rsp, blkaddr,
> > + way0_index, 0, &way0, lfidx) ||
> > + nix_spi_to_sa_index_check_duplicate(rvu, req, rsp, blkaddr,
> > + way1_index, 1, &way1, lfidx) ||
> > + nix_spi_to_sa_index_check_duplicate(rvu, req, rsp, blkaddr,
> > + way2_index, 2, &way2, lfidx) ||
> > + nix_spi_to_sa_index_check_duplicate(rvu, req, rsp, blkaddr,
> > + way3_index, 3, &way3, lfidx)) {
> > + ret = 0;
> > + goto unlock;
> > + }
> > +
> > + /* If not present, update first available way with index */
> > + if (!way0)
> > + nix_spi_to_sa_index_table_update(rvu, req, rsp, blkaddr,
> > + way0_index, 0, lfidx);
> > + else if (!way1)
> > + nix_spi_to_sa_index_table_update(rvu, req, rsp, blkaddr,
> > + way1_index, 1, lfidx);
> > + else if (!way2)
> > + nix_spi_to_sa_index_table_update(rvu, req, rsp, blkaddr,
> > + way2_index, 2, lfidx);
> > + else if (!way3)
> > + nix_spi_to_sa_index_table_update(rvu, req, rsp, blkaddr,
> > + way3_index, 3, lfidx);
> > +unlock:
> > + mutex_unlock(&rvu->rsrc_lock);
> > +exit:
> > + return ret;
> > +}
> > +
> > +int rvu_nix_free_spi_to_sa_table(struct rvu *rvu, uint16_t pcifunc)
> > +{
> > + struct rvu_hwinfo *hw = rvu->hw;
> > + int lfidx, lfid;
> > + int index, way;
> > + u64 value, key;
> Maintain RCT order here
ACK.
> > + int blkaddr;
> > +
> > + if (!hw->cap.spi_to_sas)
> > + return 0;
> > +
> > + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc);
> > + lfidx = rvu_get_lf(rvu, &hw->block[blkaddr], pcifunc, 0);
> > + if (lfidx < 0)
> > + return NIX_AF_ERR_AF_LF_INVALID;
> > +
> > + mutex_lock(&rvu->rsrc_lock);
> > + for (index = 0; index < hw->cap.spi_to_sas / 4; index++) {
> > + for (way = 0; way < 4; way++) {
> > + key = rvu_read64(rvu, blkaddr,
> > + NIX_AF_SPI_TO_SA_KEYX_WAYX(index, way));
> > + lfid = ((key >> 48) & 0x7f);
> > + if (lfid == lfidx) {
> > + key = 0;
> > + rvu_write64(rvu, blkaddr,
> > + NIX_AF_SPI_TO_SA_KEYX_WAYX(index, way),
> > + key);
> > + value = 0;
> > + rvu_write64(rvu, blkaddr,
> > + NIX_AF_SPI_TO_SA_VALUEX_WAYX(index, way),
> > + value);
> > + }
> > + }
> > + }
> > + mutex_unlock(&rvu->rsrc_lock);
> > +
> > + return 0;
> > +}
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
> > index e5e005d5d71e..b64547fe4811 100644
> > --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
> > +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
> > @@ -396,6 +396,10 @@
> > #define NIX_AF_RX_CHANX_CFG(a) (0x1A30 | (a) << 15)
> > #define NIX_AF_CINT_TIMERX(a) (0x1A40 | (a) << 18)
> > #define NIX_AF_LSO_FORMATX_FIELDX(a, b) (0x1B00 | (a) << 16 | (b) << 3)
> > +#define NIX_AF_SPI_TO_SA_KEYX_WAYX(a, b) (0x1C00 | (a) << 16 | (b) << 3)
> > +#define NIX_AF_SPI_TO_SA_VALUEX_WAYX(a, b) (0x1C40 | (a) << 16 | (b) << 3)
> > +#define NIX_AF_SPI_TO_SA_HASH_KEY (0x1C90)
> > +#define NIX_AF_SPI_TO_SA_HASH_VALUE (0x1CA0)
> > #define NIX_AF_LFX_CFG(a) (0x4000 | (a) << 17)
> > #define NIX_AF_LFX_SQS_CFG(a) (0x4020 | (a) << 17)
> > #define NIX_AF_LFX_TX_CFG2(a) (0x4028 | (a) << 17)
> > --
> > 2.43.0
> >
> >
>
>
> --
> Regards,
> Kalesh AP
Thanks for the review.
Regards,
Tanmay
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [net-next PATCH v1 00/15] Enable Inbound IPsec offload on Marvell CN10K SoC
2025-05-05 17:52 ` [net-next PATCH v1 00/15] Enable Inbound IPsec offload on Marvell CN10K SoC Leon Romanovsky
@ 2025-05-13 5:11 ` Tanmay Jagdale
0 siblings, 0 replies; 43+ messages in thread
From: Tanmay Jagdale @ 2025-05-13 5:11 UTC (permalink / raw)
To: Leon Romanovsky
Cc: brezillon, schalla, herbert, davem, sgoutham, lcherian, gakula,
jerinj, hkelam, sbhatta, andrew+netdev, edumazet, kuba, pabeni,
bbhushan2, bhelgaas, pstanner, gregkh, peterz, linux,
giovanni.cabiddu, linux-crypto, linux-kernel, netdev, rkannoth,
sumang, gcherian
Hi Leon,
On 2025-05-05 at 23:22:32, Leon Romanovsky (leon@kernel.org) wrote:
> On Fri, May 02, 2025 at 06:49:41PM +0530, Tanmay Jagdale wrote:
> > This patch series adds support for inbound inline IPsec flows for the
> > Marvell CN10K SoC.
>
> It will be much easier if in commit messages and comments you
> will use kernel naming, e.g. "IPsec packet offload" and not "inline IPsec", e.t.c.
Okay sure, I will update the patch series with the kernel naming
convention.
>
> Also, I'm wonder, do you have performance numbers for this code?
Sure, I'll share the performance numbers in the next version.
>
> Thanks
Thanks and regards,
Tanmay
>
> >
> > The packet flow
> > ---------------
> > An encrypted IPSec packet goes through two passes in the RVU hardware
> > before reaching the CPU.
> > First Pass:
> > The first pass involves identifying the packet as IPSec, assigning an RQ,
> > allocating a buffer from the Aura pool and then sending it to CPT for decryption.
> >
> > Second Pass:
> > After CPT decrypts the packet, it sends a metapacket to NIXRX via the X2P
> > bus. The metapacket contains CPT_PARSE_HDR_S structure and some initial
> > bytes of the decrypted packet which would help NIXRX in classification.
> > CPT also sets BIT(11) of the channel number to further help in identification.
> > NIXRX allocates a new buffer for this packet and submits it to the CPU.
> >
> > Once the decrypted metapacket packet is delivered to the CPU, get the WQE
> > pointer from CPT_PARSE_HDR_S in the packet buffer. This WQE points to the
> > complete decrypted packet. We create an skb using this, set the relevant
> > XFRM packet mode flags to indicate successful decryption, and submit it
> > to the network stack.
> >
> >
> > Patches are grouped as follows:
> > -------------------------------
> > 1) CPT LF movement from crypto driver to RVU AF
> > 0001-crypto-octeontx2-Share-engine-group-info-with-AF-dri.patch
> > 0002-octeontx2-af-Configure-crypto-hardware-for-inline-ip.patch
> > 0003-octeontx2-af-Setup-Large-Memory-Transaction-for-cryp.patch
> > 0004-octeontx2-af-Handle-inbound-inline-ipsec-config-in-A.patch
> > 0005-crypto-octeontx2-Remove-inbound-inline-ipsec-config.patch
> >
> > 2) RVU AF Mailbox changes for CPT 2nd pass RQ mask, SPI-to-SA table,
> > NIX-CPT BPID configuration
> > 0006-octeontx2-af-Add-support-for-CPT-second-pass.patch
> > 0007-octeontx2-af-Add-support-for-SPI-to-SA-index-transla.patch
> > 0008-octeontx2-af-Add-mbox-to-alloc-free-BPIDs.patch
> >
> > 3) Inbound Inline IPsec support patches
> > 0009-octeontx2-pf-ipsec-Allocate-Ingress-SA-table.patch
> > 0010-octeontx2-pf-ipsec-Setup-NIX-HW-resources-for-inboun.patch
> > 0011-octeontx2-pf-ipsec-Handle-NPA-threshhold-interrupt.patch
> > 0012-octeontx2-pf-ipsec-Initialize-ingress-IPsec.patch
> > 0013-octeontx2-pf-ipsec-Manage-NPC-rules-and-SPI-to-SA-ta.patch
> > 0014-octeontx2-pf-ipsec-Process-CPT-metapackets.patch
> > 0015-octeontx2-pf-ipsec-Add-XFRM-state-and-policy-hooks-f.patch
> >
> >
> > Bharat Bhushan (5):
> > crypto: octeontx2: Share engine group info with AF driver
> > octeontx2-af: Configure crypto hardware for inline ipsec
> > octeontx2-af: Setup Large Memory Transaction for crypto
> > octeontx2-af: Handle inbound inline ipsec config in AF
> > crypto: octeontx2: Remove inbound inline ipsec config
> >
> > Geetha sowjanya (1):
> > octeontx2-af: Add mbox to alloc/free BPIDs
> >
> > Kiran Kumar K (1):
> > octeontx2-af: Add support for SPI to SA index translation
> >
> > Rakesh Kudurumalla (1):
> > octeontx2-af: Add support for CPT second pass
> >
> > Tanmay Jagdale (7):
> > octeontx2-pf: ipsec: Allocate Ingress SA table
> > octeontx2-pf: ipsec: Setup NIX HW resources for inbound flows
> > octeontx2-pf: ipsec: Handle NPA threshold interrupt
> > octeontx2-pf: ipsec: Initialize ingress IPsec
> > octeontx2-pf: ipsec: Manage NPC rules and SPI-to-SA table entries
> > octeontx2-pf: ipsec: Process CPT metapackets
> > octeontx2-pf: ipsec: Add XFRM state and policy hooks for inbound flows
> >
> > .../marvell/octeontx2/otx2_cpt_common.h | 8 -
> > drivers/crypto/marvell/octeontx2/otx2_cptpf.h | 10 -
> > .../marvell/octeontx2/otx2_cptpf_main.c | 50 +-
> > .../marvell/octeontx2/otx2_cptpf_mbox.c | 286 +---
> > .../marvell/octeontx2/otx2_cptpf_ucode.c | 116 +-
> > .../marvell/octeontx2/otx2_cptpf_ucode.h | 3 +-
> > .../ethernet/marvell/octeontx2/af/Makefile | 2 +-
> > .../ethernet/marvell/octeontx2/af/common.h | 1 +
> > .../net/ethernet/marvell/octeontx2/af/mbox.h | 119 +-
> > .../net/ethernet/marvell/octeontx2/af/rvu.c | 9 +-
> > .../net/ethernet/marvell/octeontx2/af/rvu.h | 71 +
> > .../ethernet/marvell/octeontx2/af/rvu_cn10k.c | 11 +
> > .../ethernet/marvell/octeontx2/af/rvu_cpt.c | 706 +++++++++-
> > .../ethernet/marvell/octeontx2/af/rvu_cpt.h | 71 +
> > .../ethernet/marvell/octeontx2/af/rvu_nix.c | 230 +++-
> > .../marvell/octeontx2/af/rvu_nix_spi.c | 220 +++
> > .../ethernet/marvell/octeontx2/af/rvu_reg.h | 16 +
> > .../marvell/octeontx2/af/rvu_struct.h | 4 +-
> > .../marvell/octeontx2/nic/cn10k_ipsec.c | 1191 ++++++++++++++++-
> > .../marvell/octeontx2/nic/cn10k_ipsec.h | 152 +++
> > .../marvell/octeontx2/nic/otx2_common.c | 23 +-
> > .../marvell/octeontx2/nic/otx2_common.h | 16 +
> > .../ethernet/marvell/octeontx2/nic/otx2_pf.c | 17 +
> > .../marvell/octeontx2/nic/otx2_struct.h | 16 +
> > .../marvell/octeontx2/nic/otx2_txrx.c | 25 +-
> > .../ethernet/marvell/octeontx2/nic/otx2_vf.c | 4 +
> > 26 files changed, 2915 insertions(+), 462 deletions(-)
> > create mode 100644 drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.h
> > create mode 100644 drivers/net/ethernet/marvell/octeontx2/af/rvu_nix_spi.c
> >
> > --
> > 2.43.0
> >
> >
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [net-next PATCH v1 06/15] octeontx2-af: Add support for CPT second pass
2025-05-07 12:36 ` Simon Horman
@ 2025-05-13 5:18 ` Tanmay Jagdale
0 siblings, 0 replies; 43+ messages in thread
From: Tanmay Jagdale @ 2025-05-13 5:18 UTC (permalink / raw)
To: Simon Horman
Cc: schalla, herbert, davem, sgoutham, lcherian, gakula, jerinj,
hkelam, sbhatta, andrew+netdev, edumazet, kuba, pabeni, bbhushan2,
bhelgaas, pstanner, gregkh, peterz, linux, linux-crypto,
linux-kernel, netdev, rkannoth, sumang, gcherian,
Rakesh Kudurumalla
Hi Simon,
On 2025-05-07 at 18:06:22, Simon Horman (horms@kernel.org) wrote:
> On Fri, May 02, 2025 at 06:49:47PM +0530, Tanmay Jagdale wrote:
> > From: Rakesh Kudurumalla <rkudurumalla@marvell.com>
> >
> > Implemented a mailbox to add a mechanism to allocate an
> > rq_mask and apply it to a nixlf to toggle RQ context fields
> > for CPT second pass packets.
> >
> > Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>
> > Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
>
> ...
>
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c
> > index 7fa98aeb3663..18e2a48e2de1 100644
> > --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c
> > +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c
> > @@ -544,6 +544,7 @@ void rvu_program_channels(struct rvu *rvu)
> >
> > void rvu_nix_block_cn10k_init(struct rvu *rvu, struct nix_hw *nix_hw)
> > {
> > + struct rvu_hwinfo *hw = rvu->hw;
> > int blkaddr = nix_hw->blkaddr;
> > u64 cfg;
> >
> > @@ -558,6 +559,16 @@ void rvu_nix_block_cn10k_init(struct rvu *rvu, struct nix_hw *nix_hw)
> > cfg = rvu_read64(rvu, blkaddr, NIX_AF_CFG);
> > cfg |= BIT_ULL(1) | BIT_ULL(2);
>
> As per my comments on an earlier patch in this series:
> bits 1 and 2 have meaning. It would be nice to use a #define to
> convey this meaning to the reader.
Okay sure, I will update the patch series with macros that provide a
clear meaning.
>
> > rvu_write64(rvu, blkaddr, NIX_AF_CFG, cfg);
> > +
> > + cfg = rvu_read64(rvu, blkaddr, NIX_AF_CONST);
> > +
> > + if (!(cfg & BIT_ULL(62))) {
> > + hw->cap.second_cpt_pass = false;
> > + return;
> > + }
> > +
> > + hw->cap.second_cpt_pass = true;
> > + nix_hw->rq_msk.total = NIX_RQ_MSK_PROFILES;
> > }
> >
> > void rvu_apr_block_cn10k_init(struct rvu *rvu)
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
> > index 6bd995c45dad..b15fd331facf 100644
> > --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
> > +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
> > @@ -6612,3 +6612,123 @@ int rvu_mbox_handler_nix_mcast_grp_update(struct rvu *rvu,
> >
> > return ret;
> > }
> > +
> > +static inline void
> > +configure_rq_mask(struct rvu *rvu, int blkaddr, int nixlf,
> > + u8 rq_mask, bool enable)
> > +{
> > + u64 cfg, reg;
> > +
> > + cfg = rvu_read64(rvu, blkaddr, NIX_AF_LFX_RX_IPSEC_CFG1(nixlf));
> > + reg = rvu_read64(rvu, blkaddr, NIX_AF_LFX_CFG(nixlf));
> > + if (enable) {
> > + cfg |= BIT_ULL(43);
> > + reg = (reg & ~GENMASK_ULL(36, 35)) | ((u64)rq_mask << 35);
> > + } else {
> > + cfg &= ~BIT_ULL(43);
> > + reg = (reg & ~GENMASK_ULL(36, 35));
> > + }
>
> Likewise for the bit, mask, and shift here.
>
> And I think that using FIELD_PREP with another mask in place of the shift
> is also appropriate here.
ACK.
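i.e. roughly (the macro names are assumed; the bit positions come from
the code above):

    #define NIX_AF_LFX_RX_IPSEC_CFG1_RQ_MASK_ENA  BIT_ULL(43)
    #define NIX_AF_LFX_CFG_RQ_MASK                GENMASK_ULL(36, 35)

    cfg |= NIX_AF_LFX_RX_IPSEC_CFG1_RQ_MASK_ENA;
    reg &= ~NIX_AF_LFX_CFG_RQ_MASK;
    reg |= FIELD_PREP(NIX_AF_LFX_CFG_RQ_MASK, rq_mask);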
>
> > + rvu_write64(rvu, blkaddr, NIX_AF_LFX_RX_IPSEC_CFG1(nixlf), cfg);
> > + rvu_write64(rvu, blkaddr, NIX_AF_LFX_CFG(nixlf), reg);
> > +}
> > +
> > +static inline void
> > +configure_spb_cpt(struct rvu *rvu, int blkaddr, int nixlf,
> > + struct nix_rq_cpt_field_mask_cfg_req *req, bool enable)
> > +{
> > + u64 cfg;
> > +
> > + cfg = rvu_read64(rvu, blkaddr, NIX_AF_LFX_RX_IPSEC_CFG1(nixlf));
> > + if (enable) {
> > + cfg |= BIT_ULL(37);
> > + cfg &= ~GENMASK_ULL(42, 38);
> > + cfg |= ((u64)req->ipsec_cfg1.spb_cpt_sizem1 << 38);
> > + cfg &= ~GENMASK_ULL(63, 44);
> > + cfg |= ((u64)req->ipsec_cfg1.spb_cpt_aura << 44);
> > + } else {
> > + cfg &= ~BIT_ULL(37);
> > + cfg &= ~GENMASK_ULL(42, 38);
> > + cfg &= ~GENMASK_ULL(63, 44);
> > + }
>
> And here too.
>
> > + rvu_write64(rvu, blkaddr, NIX_AF_LFX_RX_IPSEC_CFG1(nixlf), cfg);
> > +}
>
> ...
>
> > +int rvu_mbox_handler_nix_lf_inline_rq_cfg(struct rvu *rvu,
> > + struct nix_rq_cpt_field_mask_cfg_req *req,
> > + struct msg_rsp *rsp)
>
> It would be nice to reduce this to 80 columns wide or less.
> Perhaps like this?
>
> int
> rvu_mbox_handler_nix_lf_inline_rq_cfg(struct rvu *rvu,
> struct nix_rq_cpt_field_mask_cfg_req *req,
> struct msg_rsp *rsp)
>
> Or perhaps by renaming nix_rq_cpt_field_mask_cfg_req to be shorter.
Okay sure. I'll go ahead with the first suggestion so that the function
name is in sync with the rest of the file.
>
> ...
>
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
> > index 245e69fcbff9..e5e005d5d71e 100644
> > --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
> > +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
> > @@ -433,6 +433,8 @@
> > #define NIX_AF_MDQX_IN_MD_COUNT(a) (0x14e0 | (a) << 16)
> > #define NIX_AF_SMQX_STATUS(a) (0x730 | (a) << 16)
> > #define NIX_AF_MDQX_OUT_MD_COUNT(a) (0xdb0 | (a) << 16)
> > +#define NIX_AF_RX_RQX_MASKX(a, b) (0x4A40 | (a) << 16 | (b) << 3)
> > +#define NIX_AF_RX_RQX_SETX(a, b) (0x4A80 | (a) << 16 | (b) << 3)
>
> FIELD_PREP could be used here in conjunction with #defines
> for appropriate masks here too.
ACK.
>
> >
> > #define NIX_PRIV_AF_INT_CFG (0x8000000)
> > #define NIX_PRIV_LFX_CFG (0x8000010)
>
> ...
Thanks,
Tanmay
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [net-next PATCH v1 04/15] octeontx2-af: Handle inbound inline ipsec config in AF
2025-05-07 9:28 ` Simon Horman
@ 2025-05-13 6:08 ` Tanmay Jagdale
0 siblings, 0 replies; 43+ messages in thread
From: Tanmay Jagdale @ 2025-05-13 6:08 UTC (permalink / raw)
To: Simon Horman
Cc: herbert, davem, sgoutham, lcherian, gakula, jerinj, hkelam,
sbhatta, andrew+netdev, edumazet, kuba, pabeni, bbhushan2,
bhelgaas, pstanner, gregkh, peterz, linux, linux-crypto,
linux-kernel, netdev, rkannoth, sumang, gcherian
Hi Simon,
On 2025-05-07 at 14:58:32, Simon Horman (horms@kernel.org) wrote:
> On Wed, May 07, 2025 at 10:19:18AM +0100, Simon Horman wrote:
> > On Fri, May 02, 2025 at 06:49:45PM +0530, Tanmay Jagdale wrote:
> > > From: Bharat Bhushan <bbhushan2@marvell.com>
>
> ...
>
> > > diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c
> > > index 5e6f70ac35a7..222419bd5ac9 100644
> > > --- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c
> > > +++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c
> > > @@ -326,9 +326,6 @@ static int cptpf_handle_vf_req(struct otx2_cptpf_dev *cptpf,
> > > case MBOX_MSG_GET_KVF_LIMITS:
> > > err = handle_msg_kvf_limits(cptpf, vf, req);
> > > break;
> > > - case MBOX_MSG_RX_INLINE_IPSEC_LF_CFG:
> > > - err = handle_msg_rx_inline_ipsec_lf_cfg(cptpf, req);
> > > - break;
> > >
> > > default:
> > > err = forward_to_af(cptpf, vf, req, size);
> >
> > This removes the only caller of handle_msg_rx_inline_ipsec_lf_cfg()
> > Which in turn removes the only caller of rx_inline_ipsec_lf_cfg(),
> > and in turn send_inline_ipsec_inbound_msg().
> >
> > Those functions should be removed by the same patch that makes the changes
> > above. Which I think could be split into a separate patch from the changes
> > below.
>
> Sorry for not noticing before I sent my previous email,
> but I now see that those functions are removed by the following patch.
> But I do think this needs to be re-arranged a bit to avoid regressions
> wrt W=1 builds.
Yes, I agree. Will rearrange the code blocks in the next version.
Thanks,
Tanmay
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [net-next PATCH v1 07/15] octeontx2-af: Add support for SPI to SA index translation
2025-05-07 12:45 ` Simon Horman
@ 2025-05-13 6:12 ` Tanmay Jagdale
0 siblings, 0 replies; 43+ messages in thread
From: Tanmay Jagdale @ 2025-05-13 6:12 UTC (permalink / raw)
To: Simon Horman
Cc: herbert, davem, sgoutham, lcherian, gakula, jerinj, hkelam,
sbhatta, andrew+netdev, edumazet, kuba, pabeni, bbhushan2,
bhelgaas, pstanner, gregkh, peterz, linux, linux-crypto,
linux-kernel, netdev, rkannoth, sumang, gcherian, Kiran Kumar K,
Nithin Dabilpuram
On 2025-05-07 at 18:15:17, Simon Horman (horms@kernel.org) wrote:
> On Fri, May 02, 2025 at 06:49:48PM +0530, Tanmay Jagdale wrote:
> > From: Kiran Kumar K <kirankumark@marvell.com>
> >
> > In case of IPsec, the inbound SPI can be random. HW supports mapping
> > SPI to an arbitrary SA index. SPI to SA index is done using a lookup
> > in NPC cam entry with key as SPI, MATCH_ID, LFID. Adding Mbox API
> > changes to configure the match table.
> >
> > Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
> > Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
> > Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>
> > Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
>
> ...
>
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
> > index 715efcc04c9e..5cebf10a15a7 100644
> > --- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
> > +++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
> > @@ -326,6 +326,10 @@ M(NIX_READ_INLINE_IPSEC_CFG, 0x8023, nix_read_inline_ipsec_cfg, \
> > M(NIX_LF_INLINE_RQ_CFG, 0x8024, nix_lf_inline_rq_cfg, \
> > nix_rq_cpt_field_mask_cfg_req, \
> > msg_rsp) \
> > +M(NIX_SPI_TO_SA_ADD, 0x8026, nix_spi_to_sa_add, nix_spi_to_sa_add_req, \
> > + nix_spi_to_sa_add_rsp) \
> > +M(NIX_SPI_TO_SA_DELETE, 0x8027, nix_spi_to_sa_delete, nix_spi_to_sa_delete_req, \
> > + msg_rsp) \
>
> Please keep line length to 80 columns or less in Networking code,
> unless it reduces readability.
>
> In this case perhaps:
>
> M(NIX_SPI_TO_SA_DELETE, 0x8027, nix_spi_to_sa_delete, \
> nix_spi_to_sa_delete_req, \
> msg_rsp) \
>
> Likewise throughout this patch (set).
> checkpatch.pl --max-line-length=80 is your friend.
ACK. I will adhere to the 80 columns in the next version.
Regards,
Tanmay
>
> > M(NIX_MCAST_GRP_CREATE, 0x802b, nix_mcast_grp_create, nix_mcast_grp_create_req, \
> > nix_mcast_grp_create_rsp) \
> > M(NIX_MCAST_GRP_DESTROY, 0x802c, nix_mcast_grp_destroy, nix_mcast_grp_destroy_req, \
>
> ...
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [net-next PATCH v1 09/15] octeontx2-pf: ipsec: Allocate Ingress SA table
2025-05-07 12:56 ` Simon Horman
@ 2025-05-22 9:21 ` Tanmay Jagdale
0 siblings, 0 replies; 43+ messages in thread
From: Tanmay Jagdale @ 2025-05-22 9:21 UTC (permalink / raw)
To: Simon Horman
Cc: herbert, davem, sgoutham, lcherian, gakula, jerinj, hkelam,
sbhatta, andrew+netdev, edumazet, kuba, pabeni, bbhushan2,
bhelgaas, pstanner, gregkh, peterz, linux, linux-crypto,
linux-kernel, netdev, gcherian
Hi Simon,
On 2025-05-07 at 18:26:25, Simon Horman (horms@kernel.org) wrote:
> On Fri, May 02, 2025 at 06:49:50PM +0530, Tanmay Jagdale wrote:
> > Every NIX LF has the facility to maintain a contiguous SA table that
> > is used by NIX RX to find the exact SA context pointer associated with
> > a particular flow. Allocate a 128-entry SA table where each entry is of
> > 2048 bytes which is enough to hold the complete inbound SA context.
> >
> > Add the structure definitions for SA context (cn10k_rx_sa_s) and
> > SA bookkeeping information (ctx_inb_ctx_info).
> >
> > Also, initialize the inb_sw_ctx_list to track all the SA's and their
> > associated NPC rules and hash table related data.
> >
> > Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
>
> ...
>
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
>
> ...
>
> > @@ -146,6 +169,76 @@ struct cn10k_tx_sa_s {
> > u64 hw_ctx[6]; /* W31 - W36 */
> > };
> >
> > +struct cn10k_rx_sa_s {
> > + u64 inb_ar_win_sz : 3; /* W0 */
> > + u64 hard_life_dec : 1;
> > + u64 soft_life_dec : 1;
> > + u64 count_glb_octets : 1;
> > + u64 count_glb_pkts : 1;
> > + u64 count_mib_bytes : 1;
> > + u64 count_mib_pkts : 1;
> > + u64 hw_ctx_off : 7;
> > + u64 ctx_id : 16;
> > + u64 orig_pkt_fabs : 1;
> > + u64 orig_pkt_free : 1;
> > + u64 pkind : 6;
> > + u64 rsvd_w0_40 : 1;
> > + u64 eth_ovrwr : 1;
> > + u64 pkt_output : 2;
> > + u64 pkt_format : 1;
> > + u64 defrag_opt : 2;
> > + u64 x2p_dst : 1;
> > + u64 ctx_push_size : 7;
> > + u64 rsvd_w0_55 : 1;
> > + u64 ctx_hdr_size : 2;
> > + u64 aop_valid : 1;
> > + u64 rsvd_w0_59 : 1;
> > + u64 ctx_size : 4;
> > +
> > + u64 rsvd_w1_31_0 : 32; /* W1 */
> > + u64 cookie : 32;
> > +
> > + u64 sa_valid : 1; /* W2 Control Word */
> > + u64 sa_dir : 1;
> > + u64 rsvd_w2_2_3 : 2;
> > + u64 ipsec_mode : 1;
> > + u64 ipsec_protocol : 1;
> > + u64 aes_key_len : 2;
> > + u64 enc_type : 3;
> > + u64 life_unit : 1;
> > + u64 auth_type : 4;
> > + u64 encap_type : 2;
> > + u64 et_ovrwr_ddr_en : 1;
> > + u64 esn_en : 1;
> > + u64 tport_l4_incr_csum : 1;
> > + u64 iphdr_verify : 2;
> > + u64 udp_ports_verify : 1;
> > + u64 l2_l3_hdr_on_error : 1;
> > + u64 rsvd_w25_31 : 7;
> > + u64 spi : 32;
>
> As I understand it, this driver is only intended to run on arm64 systems.
> While it is also possible, with COMPILE_TEST test, to compile the driver
> on for 64-bit systems.
Yes, this driver works only on the Marvell CN10K SoC. I have
COMPILE_TESTed it on x86 and ARM64 platforms.
>
> So, given the first point above, this may be moot. But the above
> assumes that the byte order of the host is the same as the device.
> Or perhaps more to the point, it has been written for a little-endian
> host and the device is expecting the data in that byte order.
>
> But u64 is supposed to represent host byte order. And, in my understanding
> of things, this is the kind of problem that FIELD_PREP and FIELD_GET are
> intended to avoid, when combined on endian-specific integer types (in this
> case __le64 seems appropriate).
>
> I do hesitate in bringing this up, as the above very likely works on
> all systems on which this code is intended to run. But I do so
> because it is not correct on all systems for which this code can be
> compiled. And thus seems somehow misleading.
Okay. Are you referring to a case where we compile on a BE machine
and then run on an LE platform?
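(For reference, the endian-safe style being suggested would look
roughly like this -- the struct member and mask names are illustrative
only:

    __le64 w2 = sa->w2;    /* device layout, little-endian */
    u32 spi = FIELD_GET(SA_W2_SPI, le64_to_cpu(w2));

where SA_W2_SPI would be GENMASK_ULL(63, 32) for the SPI bits of
word 2.)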
With Regards,
Tanmay
>
> > +
> > + u64 w3; /* W3 */
> > +
> > + u8 cipher_key[32]; /* W4 - W7 */
> > + u32 rsvd_w8_0_31; /* W8 : IV */
> > + u32 iv_gcm_salt;
> > + u64 rsvd_w9; /* W9 */
> > + u64 rsvd_w10; /* W10 : UDP Encap */
> > + u32 dest_ipaddr; /* W11 - Tunnel mode: outer src and dest ipaddr */
> > + u32 src_ipaddr;
> > + u64 rsvd_w12_w30[19]; /* W12 - W30 */
> > +
> > + u64 ar_base; /* W31 */
> > + u64 ar_valid_mask; /* W32 */
> > + u64 hard_sa_life; /* W33 */
> > + u64 soft_sa_life; /* W34 */
> > + u64 mib_octs; /* W35 */
> > + u64 mib_pkts; /* W36 */
> > + u64 ar_winbits; /* W37 */
> > +
> > + u64 rsvd_w38_w100[63];
> > +};
> > +
> > /* CPT instruction parameter-1 */
> > #define CN10K_IPSEC_INST_PARAM1_DIS_L4_CSUM 0x1
> > #define CN10K_IPSEC_INST_PARAM1_DIS_L3_CSUM 0x2
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [net-next PATCH v1 10/15] octeontx2-pf: ipsec: Setup NIX HW resources for inbound flows
2025-05-07 13:46 ` Simon Horman
@ 2025-05-22 9:56 ` Tanmay Jagdale
0 siblings, 0 replies; 43+ messages in thread
From: Tanmay Jagdale @ 2025-05-22 9:56 UTC (permalink / raw)
To: Simon Horman
Cc: bbrezillon, herbert, davem, sgoutham, lcherian, gakula, jerinj,
hkelam, sbhatta, andrew+netdev, edumazet, kuba, pabeni, bbhushan2,
bhelgaas, pstanner, gregkh, peterz, linux, krzysztof.kozlowski,
giovanni.cabiddu, linux-crypto, linux-kernel, netdev, gcherian
Hi Simon,
On 2025-05-07 at 19:16:20, Simon Horman (horms@kernel.org) wrote:
> On Fri, May 02, 2025 at 06:49:51PM +0530, Tanmay Jagdale wrote:
> > An incoming encrypted IPsec packet in the RVU NIX hardware needs
> > to be classified for inline fastpath processing and then assinged
>
> nit: assigned
>
> checkpatch.pl --codespell is your friend
>
ACK.
> > a RQ and Aura pool before sending to CPT for decryption.
> >
> > Create a dedicated RQ, Aura and Pool with the following setup
> > specifically for IPsec flows:
> > - Set ipsech_en, ipsecd_drop_en in RQ context to enable hardware
> > fastpath processing for IPsec flows.
> > - Configure the dedicated Aura to raise an interrupt when
> > its buffer count drops below a threshold value so that the
> > buffers can be replenished from the CPU.
> >
> > The RQ, Aura and Pool contexts are initialized only when esp-hw-offload
> > feature is enabled via ethtool.
> >
> > Also, move some of the RQ context macro definitions to otx2_common.h
> > so that they can be used in the IPsec driver as well.
> >
> > Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
>
> ...
>
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
>
> ...
>
> > +static int cn10k_ipsec_setup_nix_rx_hw_resources(struct otx2_nic *pfvf)
> > +{
> > + struct otx2_hw *hw = &pfvf->hw;
> > + int stack_pages, pool_id;
> > + struct otx2_pool *pool;
> > + int err, ptr, num_ptrs;
> > + dma_addr_t bufptr;
> > +
> > + num_ptrs = 256;
> > + pool_id = pfvf->ipsec.inb_ipsec_pool;
> > + stack_pages = (num_ptrs + hw->stack_pg_ptrs - 1) / hw->stack_pg_ptrs;
> > +
> > + mutex_lock(&pfvf->mbox.lock);
> > +
> > + /* Initialize aura context */
> > + err = cn10k_ipsec_ingress_aura_init(pfvf, pool_id, pool_id, num_ptrs);
> > + if (err)
> > + goto fail;
> > +
> > + /* Initialize pool */
> > + err = otx2_pool_init(pfvf, pool_id, stack_pages, num_ptrs, pfvf->rbsize, AURA_NIX_RQ);
> > + if (err)
>
> This appears to leak pool->fc_addr.
Okay, let me look into this.
>
> > + goto fail;
> > +
> > + /* Flush accumulated messages */
> > + err = otx2_sync_mbox_msg(&pfvf->mbox);
> > + if (err)
> > + goto pool_fail;
> > +
> > + /* Allocate pointers and free them to aura/pool */
> > + pool = &pfvf->qset.pool[pool_id];
> > + for (ptr = 0; ptr < num_ptrs; ptr++) {
> > + err = otx2_alloc_rbuf(pfvf, pool, &bufptr, pool_id, ptr);
> > + if (err) {
> > + err = -ENOMEM;
> > + goto pool_fail;
> > + }
> > + pfvf->hw_ops->aura_freeptr(pfvf, pool_id, bufptr + OTX2_HEAD_ROOM);
> > + }
> > +
> > + /* Initialize RQ and map buffers from pool_id */
> > + err = cn10k_ipsec_ingress_rq_init(pfvf, pfvf->ipsec.inb_ipsec_rq, pool_id);
> > + if (err)
> > + goto pool_fail;
> > +
> > + mutex_unlock(&pfvf->mbox.lock);
> > + return 0;
> > +
> > +pool_fail:
> > + mutex_unlock(&pfvf->mbox.lock);
> > + qmem_free(pfvf->dev, pool->stack);
> > + qmem_free(pfvf->dev, pool->fc_addr);
> > + page_pool_destroy(pool->page_pool);
> > + devm_kfree(pfvf->dev, pool->xdp);
>
> It is not clear to me why devm_kfree() is being called here.
> I didn't look deeply. But I think it is likely that
> either pool->xdp should be freed when the device is released.
> Or pool->xdp should not be allocated (and freed) using devm functions.
Good catch. We aren't using pool->xdp for inbound IPsec yet, so I'll
drop this.
>
> > + pool->xsk_pool = NULL;
>
> The clean-up of pool->stack, pool->page_pool), pool->xdp, and
> pool->xsk_pool, all seem to unwind initialisation performed by
> otx2_pool_init(). And appear to be duplicated elsewhere.
> I would suggest adding a helper for that.
Okay, I'll look into reusing common code.
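A sketch of such a helper (the name is assumed, and the pool->xdp free
is left out per the discussion above):

    static void otx2_pool_uninit(struct otx2_nic *pfvf, struct otx2_pool *pool)
    {
            qmem_free(pfvf->dev, pool->stack);
            qmem_free(pfvf->dev, pool->fc_addr);
            page_pool_destroy(pool->page_pool);
            pool->xsk_pool = NULL;
    }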
>
> > +fail:
> > + otx2_mbox_reset(&pfvf->mbox.mbox, 0);
> > + return err;
> > +}
>
> ...
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [net-next PATCH v1 13/15] octeontx2-pf: ipsec: Manage NPC rules and SPI-to-SA table entries
2025-05-07 15:58 ` Simon Horman
@ 2025-05-22 10:01 ` Tanmay Jagdale
0 siblings, 0 replies; 43+ messages in thread
From: Tanmay Jagdale @ 2025-05-22 10:01 UTC (permalink / raw)
To: Simon Horman
Cc: herbert, davem, sgoutham, lcherian, gakula, jerinj, hkelam,
sbhatta, andrew+netdev, edumazet, kuba, pabeni, bbhushan2,
bhelgaas, pstanner, gregkh, peterz, linux, linux-crypto,
linux-kernel, netdev, rkannoth, sumang, gcherian
Hi Simon,
On 2025-05-07 at 21:28:14, Simon Horman (horms@kernel.org) wrote:
> On Fri, May 02, 2025 at 06:49:54PM +0530, Tanmay Jagdale wrote:
> > NPC rule for IPsec flows
> > ------------------------
> > Incoming IPsec packets are first classified for hardware fastpath
> > processing in the NPC block. Hence, allocate an MCAM entry in NPC
> > using the MCAM_ALLOC_ENTRY mailbox to add a rule for IPsec flow
> > classification.
> >
> > Then, install an NPC rule at this entry for packet classification
> > based on ESP header and SPI value with match action as UCAST_IPSEC.
> > Also, these packets need to be directed to the dedicated receive
> > queue so provide the RQ index as part of NPC_INSTALL_FLOW mailbox.
> > Add a function to delete NPC rule as well.
> >
> > SPI-to-SA match table
> > ---------------------
> > NIX RX maintains a common hash table for matching the SPI value in
> > the ESP packet to the SA index associated with it. This table has 2K entries
> > with 4 ways. When a packet is received with action as UCAST_IPSEC, NIXRX
> > uses the SPI from the packet header to perform lookup in the SPI-to-SA
> > hash table. This lookup, if successful, returns an SA index that is used
> > by NIXRX to calculate the exact SA context address and programs it in
> > the CPT_INST_S before submitting the packet to CPT for decryption.
> >
> > Add functions to install and delete an entry from this table via the
> > NIX_SPI_TO_SA_ADD and NIX_SPI_TO_SA_DELETE mailbox calls respectively.
> >
> > When the RQs are changed at runtime via ethtool, RVU PF driver frees all
> > the resources and goes through reinitialization with the new set of receive
> > queues. As part of this flow, the UCAST_IPSEC NPC rules that were installed
> > by the RVU PF/VF driver have to be reconfigured with the new RQ index.
> >
> > So, delete the NPC rules when the interface is stopped via otx2_stop().
> > When otx2_open() is called, re-install the NPC flow and re-initialize the
> > SPI-to-SA table for every SA context that was previously installed.
> >
> > Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
> > ---
> > .../marvell/octeontx2/nic/cn10k_ipsec.c | 201 ++++++++++++++++++
> > .../marvell/octeontx2/nic/cn10k_ipsec.h | 7 +
> > .../ethernet/marvell/octeontx2/nic/otx2_pf.c | 9 +
> > 3 files changed, 217 insertions(+)
> >
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
>
> ...
>
> > +static int cn10k_inb_install_flow(struct otx2_nic *pfvf, struct xfrm_state *x,
> > + struct cn10k_inb_sw_ctx_info *inb_ctx_info)
> > +{
> > + struct npc_install_flow_req *req;
> > + int err;
> > +
> > + mutex_lock(&pfvf->mbox.lock);
> > +
> > + req = otx2_mbox_alloc_msg_npc_install_flow(&pfvf->mbox);
> > + if (!req) {
> > + err = -ENOMEM;
> > + goto out;
> > + }
> > +
> > + req->entry = inb_ctx_info->npc_mcam_entry;
> > + req->features |= BIT(NPC_IPPROTO_ESP) | BIT(NPC_IPSEC_SPI) | BIT(NPC_DMAC);
> > + req->intf = NIX_INTF_RX;
> > + req->index = pfvf->ipsec.inb_ipsec_rq;
> > + req->match_id = 0xfeed;
> > + req->channel = pfvf->hw.rx_chan_base;
> > + req->op = NIX_RX_ACTIONOP_UCAST_IPSEC;
> > + req->set_cntr = 1;
> > + req->packet.spi = x->id.spi;
> > + req->mask.spi = 0xffffffff;
>
> I realise that the value is isomorphic, but I would use the following
> so that the rvalue has an endian annotation that matches the lvalue.
>
> req->mask.spi = cpu_to_be32(0xffffffff);
>
> Flagged by Sparse.
ACK.
>
> > +
> > + /* Send message to AF */
> > + err = otx2_sync_mbox_msg(&pfvf->mbox);
> > +out:
> > + mutex_unlock(&pfvf->mbox.lock);
> > + return err;
> > +}
>
> ...
>
> > +static int cn10k_inb_delete_spi_to_sa_match_entry(struct otx2_nic *pfvf,
> > + struct cn10k_inb_sw_ctx_info *inb_ctx_info)
>
> gcc-14.2.0 (at least) complains that cn10k_inb_delete_spi_to_sa_match_entry
> is unused.
Oops.
>
> Likewise for cn10k_inb_delete_flow and cn10k_inb_delete_spi_to_sa_match_entry.
>
> I'm unsure of the best way to address this but it would be nice
> to avoid breaking build bisection for such a trivial reason.
>
> Some ideas:
> * Maybe it is possible to squash this and the last patch,
> or bring part of the last patch into this patch, or otherwise
> rearrange things to avoid this problem.
> * Add temporary __maybe_unused annotations.
> (I'd consider this a last resort.)
Okay, I'll rearrange the code to avoid this issue.
Thanks,
Tanmay
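[For completeness, the last-resort annotation would look roughly like
this; a sketch only, with the stub body standing in for the unchanged
helper.]

        static int __maybe_unused
        cn10k_inb_delete_spi_to_sa_match_entry(struct otx2_nic *pfvf,
                                               struct cn10k_inb_sw_ctx_info *inb_ctx_info)
        {
                /* helper body unchanged; drop the annotation once a
                 * caller lands in a later patch of the series
                 */
                return 0;
        }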
>
> ...
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [net-next PATCH v1 14/15] octeontx2-pf: ipsec: Process CPT metapackets
2025-05-07 16:30 ` Simon Horman
@ 2025-05-23 4:08 ` Tanmay Jagdale
0 siblings, 0 replies; 43+ messages in thread
From: Tanmay Jagdale @ 2025-05-23 4:08 UTC (permalink / raw)
To: Simon Horman
Cc: herbert, davem, sgoutham, lcherian, gakula, jerinj, hkelam,
sbhatta, andrew+netdev, edumazet, kuba, pabeni, bbhushan2,
bhelgaas, pstanner, gregkh, peterz, linux, linux-crypto,
linux-kernel, netdev, rkannoth, sumang, gcherian
Hi Simon,
On 2025-05-07 at 22:00:50, Simon Horman (horms@kernel.org) wrote:
> On Fri, May 02, 2025 at 06:49:55PM +0530, Tanmay Jagdale wrote:
> > CPT hardware forwards decrypted IPsec packets to NIX via the X2P bus
> > as metapackets, which are 256 bytes in length. Each metapacket
> > contains the CPT_PARSE_HDR_S and the initial bytes of the decrypted
> > packet, which help NIX RX in classifying the packet and submitting it
> > to the CPU. Additionally, CPT sets BIT(11) of the channel number to
> > indicate that it's a 2nd-pass packet from CPT.
> >
> > Since the metapackets are not complete packets, they don't have to go
> > through L3/L4 length and checksum verification, so these checks are
> > disabled via the NIX_LF_INLINE_RQ_CFG mailbox during IPsec
> > initialization.
> >
> > The CPT_PARSE_HDR_S contains a WQE pointer to the complete decrypted
> > packet. Add code in the Rx NAPI handler to parse the header and
> > extract the WQE pointer. Later, use this WQE pointer to construct the
> > skb and set the XFRM packet mode flags to indicate successful
> > decryption before submitting it to the network stack.
> >
> > Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
> > ---
> > .../marvell/octeontx2/nic/cn10k_ipsec.c | 61 +++++++++++++++++++
> > .../marvell/octeontx2/nic/cn10k_ipsec.h | 47 ++++++++++++++
> > .../marvell/octeontx2/nic/otx2_struct.h | 16 +++++
> > .../marvell/octeontx2/nic/otx2_txrx.c | 25 +++++++-
> > 4 files changed, 147 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
> > index 91c8f13b6e48..bebf5cdedee4 100644
> > --- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
> > +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
> > @@ -346,6 +346,67 @@ static int cn10k_outb_cpt_init(struct net_device *netdev)
> > return ret;
> > }
> >
> > +struct nix_wqe_rx_s *cn10k_ipsec_process_cpt_metapkt(struct otx2_nic *pfvf,
> > + struct nix_rx_sg_s *sg,
> > + struct sk_buff *skb,
> > + int qidx)
> > +{
> > + struct nix_wqe_rx_s *wqe = NULL;
> > + u64 *seg_addr = &sg->seg_addr;
> > + struct cpt_parse_hdr_s *cptp;
> > + struct xfrm_offload *xo;
> > + struct otx2_pool *pool;
> > + struct xfrm_state *xs;
> > + struct sec_path *sp;
> > + u64 *va_ptr;
> > + void *va;
> > + int i;
> > +
> > + /* CPT_PARSE_HDR_S is present in the beginning of the buffer */
> > + va = phys_to_virt(otx2_iova_to_phys(pfvf->iommu_domain, *seg_addr));
> > +
> > + /* Convert CPT_PARSE_HDR_S from BE to LE */
> > + va_ptr = (u64 *)va;
>
> phys_to_virt returns a void *. And there is no need to explicitly cast
> another pointer type to or from a void *.
>
> So probably this can simply be:
>
> va_ptr = phys_to_virt(...);
ACK.
>
>
> > + for (i = 0; i < (sizeof(struct cpt_parse_hdr_s) / sizeof(u64)); i++)
> > + va_ptr[i] = be64_to_cpu(va_ptr[i]);
>
> Please don't use the same variable to hold both big endian and
> host byte order values. Because tooling can no longer provide
> information about endian mismatches.
>
> Flagged by Sparse.
>
> Also, isn't only the long word that exactly comprises the
> wqe_ptr field of cpt_parse_hdr_s used? If so, perhaps
> only that portion needs to be converted to host byte order?
Yes, I don't need the complete cpt_parse_hdr_s to be converted,
just wqe_ptr and cookie. So I'll rework this logic.
>
> I'd explore describing the members of struct cpt_parse_hdr_s as __be64.
> And use FIELD_PREP and FIELD_GET to deal with parts of each __be64.
> I think that would lead to a simpler implementation.
ACK. I'll explore defining the structure in big-endian format
and using the FIELD_* macros.
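[A minimal sketch of that approach, assuming the word-0 layout from the
bitfield definition later in this patch (cookie in bits 31:0, match_id
in bits 47:32 of the converted word); the _be struct and mask names are
illustrative, not from the driver.]

        #include <linux/bitfield.h>
        #include <linux/bits.h>
        #include <linux/types.h>
        #include <asm/byteorder.h>

        #define CPT_PARSE_HDR_W0_COOKIE         GENMASK_ULL(31, 0)
        #define CPT_PARSE_HDR_W0_MATCH_ID       GENMASK_ULL(47, 32)

        struct cpt_parse_hdr_be {
                __be64 w0;      /* cookie, match_id, error/format bits */
                __be64 wqe_ptr; /* word 1: IOVA of the full decrypted WQE */
                __be64 w2;
                __be64 w3;
                __be64 misc;
        };

        static inline u32 cpt_parse_hdr_cookie(const struct cpt_parse_hdr_be *hdr)
        {
                /* Convert only the word we need, leaving the buffer
                 * itself in big-endian form.
                 */
                return FIELD_GET(CPT_PARSE_HDR_W0_COOKIE,
                                 be64_to_cpu(hdr->w0));
        }

        static inline u64 cpt_parse_hdr_wqe_ptr(const struct cpt_parse_hdr_be *hdr)
        {
                return be64_to_cpu(hdr->wqe_ptr);
        }

[Keeping the storage as __be64 means Sparse can still flag any direct
dereference that skips the conversion.]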
>
> > +
> > + cptp = (struct cpt_parse_hdr_s *)va;
> > +
> > + /* Convert the wqe_ptr from CPT_PARSE_HDR_S to a CPU usable pointer */
> > + wqe = (struct nix_wqe_rx_s *)phys_to_virt(otx2_iova_to_phys(pfvf->iommu_domain,
> > + cptp->wqe_ptr));
>
> There is probably no need to cast from void * here either.
>
> wqe = phys_to_virt(otx2_iova_to_phys(pfvf->iommu_domain,
> cptp->wqe_ptr));
>
ACK.
> > +
> > + /* Get the XFRM state pointer stored in SA context */
> > + va_ptr = pfvf->ipsec.inb_sa->base +
> > + (cptp->cookie * pfvf->ipsec.sa_tbl_entry_sz) + 1024;
> > + xs = (struct xfrm_state *)*va_ptr;
>
> Maybe this can be more succinctly written as follows?
>
> xs = pfvf->ipsec.inb_sa->base +
> (cptp->cookie * pfvf->ipsec.sa_tbl_entry_sz) + 1024;
>
ACK.
> > +
> > + /* Set XFRM offload status and flags for successful decryption */
> > + sp = secpath_set(skb);
> > + if (!sp) {
> > + netdev_err(pfvf->netdev, "Failed to secpath_set\n");
> > + wqe = NULL;
> > + goto err_out;
> > + }
> > +
> > + rcu_read_lock();
> > + xfrm_state_hold(xs);
> > + rcu_read_unlock();
> > +
> > + sp->xvec[sp->len++] = xs;
> > + sp->olen++;
> > +
> > + xo = xfrm_offload(skb);
> > + xo->flags = CRYPTO_DONE;
> > + xo->status = CRYPTO_SUCCESS;
> > +
> > +err_out:
> > + /* Free the metapacket memory here since it's not needed anymore */
> > + pool = &pfvf->qset.pool[qidx];
> > + otx2_free_bufs(pfvf, pool, *seg_addr - OTX2_HEAD_ROOM, pfvf->rbsize);
> > + return wqe;
> > +}
> > +
> > static int cn10k_inb_alloc_mcam_entry(struct otx2_nic *pfvf,
> > struct cn10k_inb_sw_ctx_info *inb_ctx_info)
> > {
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
> > index aad5ebea64ef..68046e377486 100644
> > --- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
> > +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
> > @@ -8,6 +8,7 @@
> > #define CN10K_IPSEC_H
> >
> > #include <linux/types.h>
> > +#include "otx2_struct.h"
> >
> > DECLARE_STATIC_KEY_FALSE(cn10k_ipsec_sa_enabled);
> >
> > @@ -302,6 +303,41 @@ struct cpt_sg_s {
> > u64 rsvd_63_50 : 14;
> > };
> >
> > +/* CPT Parse Header Structure for Inbound packets */
> > +struct cpt_parse_hdr_s {
> > + /* Word 0 */
> > + u64 cookie : 32;
> > + u64 match_id : 16;
> > + u64 err_sum : 1;
> > + u64 reas_sts : 4;
> > + u64 reserved_53 : 1;
> > + u64 et_owr : 1;
> > + u64 pkt_fmt : 1;
> > + u64 pad_len : 3;
> > + u64 num_frags : 3;
> > + u64 pkt_out : 2;
> > +
> > + /* Word 1 */
> > + u64 wqe_ptr;
> > +
> > + /* Word 2 */
> > + u64 frag_age : 16;
> > + u64 res_32_16 : 16;
> > + u64 pf_func : 16;
> > + u64 il3_off : 8;
> > + u64 fi_pad : 3;
> > + u64 fi_offset : 5;
> > +
> > + /* Word 3 */
> > + u64 hw_ccode : 8;
> > + u64 uc_ccode : 8;
> > + u64 res3_32_16 : 16;
> > + u64 spi : 32;
> > +
> > + /* Word 4 */
> > + u64 misc;
> > +};
> > +
> > /* CPT LF_INPROG Register */
> > #define CPT_LF_INPROG_INFLIGHT GENMASK_ULL(8, 0)
> > #define CPT_LF_INPROG_GRB_CNT GENMASK_ULL(39, 32)
>
> ...
>
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
>
> ...
>
> > @@ -355,8 +359,25 @@ static void otx2_rcv_pkt_handler(struct otx2_nic *pfvf,
> > if (unlikely(!skb))
> > return;
> >
> > - start = (void *)sg;
> > - end = start + ((cqe->parse.desc_sizem1 + 1) * 16);
> > + if (parse->chan & 0x800) {
> > + orig_pkt_wqe = cn10k_ipsec_process_cpt_metapkt(pfvf, sg, skb, cq->cq_idx);
> > + if (!orig_pkt_wqe) {
> > + netdev_err(pfvf->netdev, "Invalid WQE in CPT metapacket\n");
> > + napi_free_frags(napi);
> > + cq->pool_ptrs++;
> > + return;
> > + }
> > + /* Switch *sg to the orig_pkt_wqe's *sg which has the actual
> > + * complete decrypted packet by CPT.
> > + */
> > + sg = &orig_pkt_wqe->sg;
> > + start = (void *)sg;
>
> I don't think this cast is necessary, start is a void *.
> Likewise below.
ACK.
>
> > + end = start + ((orig_pkt_wqe->parse.desc_sizem1 + 1) * 16);
> > + } else {
> > + start = (void *)sg;
> > + end = start + ((cqe->parse.desc_sizem1 + 1) * 16);
> > + }
>
> The (size + 1) * 16 calculation seems to be repeated.
> Perhaps a helper function is appropriate.
ACK.
Thanks,
Tanmay
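[One possible shape for that helper, as a sketch; the name and placement
are illustrative, though desc_sizem1 is the field used by both branches
in the patch.]

        /* NIX Rx descriptors are sized in 16-byte units; desc_sizem1
         * holds the unit count minus one.
         */
        static inline int otx2_rx_desc_len(const struct nix_rx_parse_s *parse)
        {
                return (parse->desc_sizem1 + 1) * 16;
        }

[Both branches would then reduce to
end = start + otx2_rx_desc_len(&cqe->parse) and
end = start + otx2_rx_desc_len(&orig_pkt_wqe->parse).]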
>
> > +
> > while (start < end) {
> > sg = (struct nix_rx_sg_s *)start;
> > seg_addr = &sg->seg_addr;
> > --
> > 2.43.0
> >
> >
^ permalink raw reply [flat|nested] 43+ messages in thread
end of thread, other threads:[~2025-05-23 4:09 UTC | newest]
Thread overview: 43+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-05-02 13:19 [net-next PATCH v1 00/15] Enable Inbound IPsec offload on Marvell CN10K SoC Tanmay Jagdale
2025-05-02 13:19 ` [net-next PATCH v1 01/15] crypto: octeontx2: Share engine group info with AF driver Tanmay Jagdale
2025-05-02 13:19 ` [net-next PATCH v1 02/15] octeontx2-af: Configure crypto hardware for inline ipsec Tanmay Jagdale
2025-05-06 20:24 ` Simon Horman
2025-05-08 10:56 ` Bharat Bhushan
2025-05-02 13:19 ` [net-next PATCH v1 03/15] octeontx2-af: Setup Large Memory Transaction for crypto Tanmay Jagdale
2025-05-02 13:19 ` [net-next PATCH v1 04/15] octeontx2-af: Handle inbound inline ipsec config in AF Tanmay Jagdale
2025-05-07 9:19 ` Simon Horman
2025-05-07 9:28 ` Simon Horman
2025-05-13 6:08 ` Tanmay Jagdale
2025-05-02 13:19 ` [net-next PATCH v1 05/15] crypto: octeontx2: Remove inbound inline ipsec config Tanmay Jagdale
2025-05-02 13:19 ` [net-next PATCH v1 06/15] octeontx2-af: Add support for CPT second pass Tanmay Jagdale
2025-05-07 7:58 ` kernel test robot
2025-05-07 12:36 ` Simon Horman
2025-05-13 5:18 ` Tanmay Jagdale
2025-05-02 13:19 ` [net-next PATCH v1 07/15] octeontx2-af: Add support for SPI to SA index translation Tanmay Jagdale
2025-05-03 16:12 ` Kalesh Anakkur Purayil
2025-05-13 5:08 ` Tanmay Jagdale
2025-05-07 12:45 ` Simon Horman
2025-05-13 6:12 ` Tanmay Jagdale
2025-05-02 13:19 ` [net-next PATCH v1 08/15] octeontx2-af: Add mbox to alloc/free BPIDs Tanmay Jagdale
2025-05-02 13:19 ` [net-next PATCH v1 09/15] octeontx2-pf: ipsec: Allocate Ingress SA table Tanmay Jagdale
2025-05-07 12:56 ` Simon Horman
2025-05-22 9:21 ` Tanmay Jagdale
2025-05-02 13:19 ` [net-next PATCH v1 10/15] octeontx2-pf: ipsec: Setup NIX HW resources for inbound flows Tanmay Jagdale
2025-05-07 10:03 ` kernel test robot
2025-05-07 13:46 ` Simon Horman
2025-05-22 9:56 ` Tanmay Jagdale
2025-05-02 13:19 ` [net-next PATCH v1 11/15] octeontx2-pf: ipsec: Handle NPA threshold interrupt Tanmay Jagdale
2025-05-07 12:04 ` kernel test robot
2025-05-07 14:20 ` Simon Horman
2025-05-02 13:19 ` [net-next PATCH v1 12/15] octeontx2-pf: ipsec: Initialize ingress IPsec Tanmay Jagdale
2025-05-02 13:19 ` [net-next PATCH v1 13/15] octeontx2-pf: ipsec: Manage NPC rules and SPI-to-SA table entries Tanmay Jagdale
2025-05-07 15:58 ` Simon Horman
2025-05-22 10:01 ` Tanmay Jagdale
2025-05-02 13:19 ` [net-next PATCH v1 14/15] octeontx2-pf: ipsec: Process CPT metapackets Tanmay Jagdale
2025-05-07 16:30 ` Simon Horman
2025-05-23 4:08 ` Tanmay Jagdale
2025-05-02 13:19 ` [net-next PATCH v1 15/15] octeontx2-pf: ipsec: Add XFRM state and policy hooks for inbound flows Tanmay Jagdale
2025-05-07 6:42 ` kernel test robot
2025-05-07 18:31 ` Simon Horman
2025-05-05 17:52 ` [net-next PATCH v1 00/15] Enable Inbound IPsec offload on Marvell CN10K SoC Leon Romanovsky
2025-05-13 5:11 ` Tanmay Jagdale