linux-crypto.vger.kernel.org archive mirror
* [PATCH net-next v4 00/14] Enable Inbound IPsec offload on Marvell CN10K SoC
@ 2025-08-19  2:14 Tanmay Jagdale
  2025-08-19  2:14 ` [PATCH net-next v4 01/14] crypto: octeontx2: Share engine group info with AF driver Tanmay Jagdale
                   ` (13 more replies)
  0 siblings, 14 replies; 16+ messages in thread
From: Tanmay Jagdale @ 2025-08-19  2:14 UTC (permalink / raw)
  To: davem, leon, horms, sgoutham, bbhushan2
  Cc: linux-crypto, netdev, Tanmay Jagdale

This patch series adds support for inbound IPsec packet offload on the
Marvell CN10K SoC.

The packet flow
---------------
An encrypted IPsec packet goes through two passes in the RVU hardware
before reaching the CPU.
First Pass:
  The first pass involves identifying the packet as IPsec, assigning an RQ,
  allocating a buffer from the Aura pool, and then sending it to CPT for
  decryption.

Second Pass:
  After CPT decrypts the packet, it sends a metapacket to NIXRX via the X2P
  bus. The metapacket contains the CPT_PARSE_HDR_S structure and some initial
  bytes of the decrypted packet, which help NIXRX with classification.
  CPT also sets BIT(11) of the channel number to further aid identification.
  NIXRX allocates a new buffer for this packet and submits it to the CPU.

Once the decrypted metapacket is delivered to the CPU, we get the WQE
pointer from CPT_PARSE_HDR_S in the packet buffer. This WQE points to the
complete decrypted packet. We create an skb from it, set the relevant
XFRM packet mode flags to indicate successful decryption, and submit it
to the network stack.
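
To make the second pass concrete, below is a minimal sketch of the
metapacket handling described above. cpt_parse_hdr_s, wqe_ptr and
build_skb_from_wqe() are illustrative placeholders rather than the
exact driver definitions; only the secpath/xfrm_offload calls are the
standard kernel XFRM offload API:

	/* Sketch: second-pass CPT metapacket processing (placeholder names) */
	static void cn10k_rx_cpt_metapkt(struct napi_struct *napi, void *meta,
					 struct xfrm_state *x)
	{
		struct cpt_parse_hdr_s *hdr = meta;	/* head of metapacket */
		struct xfrm_offload *xo;
		struct sec_path *sp;
		struct sk_buff *skb;

		/* The WQE pointer leads to the complete decrypted packet */
		skb = build_skb_from_wqe(hdr->wqe_ptr);
		if (!skb)
			return;

		/* Tell XFRM that hardware already decrypted this packet */
		sp = secpath_set(skb);
		if (sp) {
			sp->xvec[sp->len++] = x;	/* matching SA */
			sp->olen++;
		}
		xo = xfrm_offload(skb);
		if (xo) {
			xo->flags = CRYPTO_DONE;
			xo->status = CRYPTO_SUCCESS;
		}

		napi_gro_receive(napi, skb);
	}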

Bharat Bhushan (4):
  crypto: octeontx2: Share engine group info with AF driver
  octeontx2-af: Configure crypto hardware for inline ipsec
  octeontx2-af: Setup Large Memory Transaction for crypto
  octeontx2-af: Handle inbound inline ipsec config in AF

Geetha sowjanya (1):
  octeontx2-af: Add mbox to alloc/free BPIDs

Kiran Kumar K (1):
  octeontx2-af: Add support for SPI to SA index translation

Rakesh Kudurumalla (1):
  octeontx2-af: Add support for CPT second pass

Tanmay Jagdale (7):
  octeontx2-pf: ipsec: Allocate Ingress SA table
  octeontx2-pf: ipsec: Setup NIX HW resources for inbound flows
  octeontx2-pf: ipsec: Handle NPA threshold interrupt
  octeontx2-pf: ipsec: Initialize ingress IPsec
  octeontx2-pf: ipsec: Process CPT metapackets
  octeontx2-pf: ipsec: Manage NPC rules and SPI-to-SA table entries
  octeontx2-pf: ipsec: Add XFRM state and policy hooks for inbound flows

 .../marvell/octeontx2/otx2_cpt_common.h       |    8 -
 drivers/crypto/marvell/octeontx2/otx2_cptpf.h |   10 -
 .../marvell/octeontx2/otx2_cptpf_main.c       |   50 +-
 .../marvell/octeontx2/otx2_cptpf_mbox.c       |  282 +---
 .../marvell/octeontx2/otx2_cptpf_ucode.c      |  116 +-
 .../marvell/octeontx2/otx2_cptpf_ucode.h      |    3 +-
 .../ethernet/marvell/octeontx2/af/Makefile    |    2 +-
 .../ethernet/marvell/octeontx2/af/common.h    |    1 +
 .../net/ethernet/marvell/octeontx2/af/mbox.c  |    3 -
 .../net/ethernet/marvell/octeontx2/af/mbox.h  |  119 +-
 .../net/ethernet/marvell/octeontx2/af/rvu.c   |    8 +-
 .../net/ethernet/marvell/octeontx2/af/rvu.h   |   71 +
 .../ethernet/marvell/octeontx2/af/rvu_cn10k.c |   11 +
 .../ethernet/marvell/octeontx2/af/rvu_cpt.c   |  707 +++++++++-
 .../ethernet/marvell/octeontx2/af/rvu_cpt.h   |   71 +
 .../ethernet/marvell/octeontx2/af/rvu_nix.c   |  235 +++-
 .../marvell/octeontx2/af/rvu_nix_spi.c        |  211 +++
 .../ethernet/marvell/octeontx2/af/rvu_reg.h   |   40 +
 .../marvell/octeontx2/af/rvu_struct.h         |    4 +-
 .../marvell/octeontx2/nic/cn10k_ipsec.c       | 1208 ++++++++++++++++-
 .../marvell/octeontx2/nic/cn10k_ipsec.h       |  136 ++
 .../marvell/octeontx2/nic/otx2_common.c       |   23 +-
 .../marvell/octeontx2/nic/otx2_common.h       |   16 +
 .../ethernet/marvell/octeontx2/nic/otx2_pf.c  |   17 +
 .../ethernet/marvell/octeontx2/nic/otx2_reg.h |    5 +
 .../marvell/octeontx2/nic/otx2_struct.h       |   16 +
 .../marvell/octeontx2/nic/otx2_txrx.c         |   27 +-
 .../ethernet/marvell/octeontx2/nic/otx2_vf.c  |    4 +
 28 files changed, 2940 insertions(+), 464 deletions(-)
 create mode 100644 drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.h
 create mode 100644 drivers/net/ethernet/marvell/octeontx2/af/rvu_nix_spi.c

-- 
2.43.0



* [PATCH net-next v4 01/14] crypto: octeontx2: Share engine group info with AF driver
  2025-08-19  2:14 [PATCH net-next v4 00/14] Enable Inbound IPsec offload on Marvell CN10K SoC Tanmay Jagdale
@ 2025-08-19  2:14 ` Tanmay Jagdale
  2025-08-19  2:14 ` [PATCH net-next v4 02/14] octeontx2-af: Configure crypto hardware for inline ipsec Tanmay Jagdale
                   ` (12 subsequent siblings)
  13 siblings, 0 replies; 16+ messages in thread
From: Tanmay Jagdale @ 2025-08-19  2:14 UTC (permalink / raw)
  To: davem, leon, horms, sgoutham, bbhushan2
  Cc: linux-crypto, netdev, Tanmay Jagdale

From: Bharat Bhushan <bbhushan2@marvell.com>

The CPT crypto hardware has multiple engines of different types, and
engines of a given type are attached to one of the engine groups.
Software submits encap/decap work to these engine groups. The engine
group details are only known to the CPT crypto driver, so share them
with the AF driver using a mailbox message to enable use cases like
inline IPsec.

Also, there is no need to try to delete engine groups if engine group
initialization fails, since engine groups are never created before
engine group initialization.
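
For illustration, once this information is shared, the AF can resolve
an engine group number locally with a lookup like the sketch below
(this mirrors what a later patch in this series does to find the
inline-IPsec group; rvu_cpt_shared_egrp() itself is a hypothetical
helper, not part of this patch):

	/* Sketch: AF-side lookup of an engine group shared by the CPT PF */
	static int rvu_cpt_shared_egrp(struct rvu *rvu, u8 eng_type)
	{
		int i;

		for (i = 0; i < OTX2_CPT_MAX_ENG_TYPES; i++) {
			if (rvu->rvu_cpt.eng_grp[i].eng_type == eng_type)
				return rvu->rvu_cpt.eng_grp[i].grp_num;
		}

		return -ENOENT;	/* not shared (yet) */
	}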

Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
---
Changes in v4:
- None

Changes in v3:
- None

Changes in v2:
- None 

V1 Link: https://lore.kernel.org/netdev/20250502132005.611698-2-tanmay@marvell.com/
V2 Link: https://lore.kernel.org/netdev/20250618113020.130888-2-tanmay@marvell.com/
V3 Link: https://lore.kernel.org/netdev/20250711121317.340326-2-tanmay@marvell.com/

 .../marvell/octeontx2/otx2_cpt_common.h       |   7 --
 .../marvell/octeontx2/otx2_cptpf_main.c       |   4 +-
 .../marvell/octeontx2/otx2_cptpf_mbox.c       |   1 +
 .../marvell/octeontx2/otx2_cptpf_ucode.c      | 116 ++++++++++++++++--
 .../marvell/octeontx2/otx2_cptpf_ucode.h      |   3 +-
 .../net/ethernet/marvell/octeontx2/af/mbox.h  |  16 +++
 .../net/ethernet/marvell/octeontx2/af/rvu.h   |  10 ++
 .../ethernet/marvell/octeontx2/af/rvu_cpt.c   |  21 ++++
 8 files changed, 160 insertions(+), 18 deletions(-)

diff --git a/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h b/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h
index 062def303dce..89d4dfbb1e8e 100644
--- a/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h
+++ b/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h
@@ -31,13 +31,6 @@
 
 #define BAD_OTX2_CPT_ENG_TYPE OTX2_CPT_MAX_ENG_TYPES
 
-enum otx2_cpt_eng_type {
-	OTX2_CPT_AE_TYPES = 1,
-	OTX2_CPT_SE_TYPES = 2,
-	OTX2_CPT_IE_TYPES = 3,
-	OTX2_CPT_MAX_ENG_TYPES,
-};
-
 /* Take mbox id from end of CPT mbox range in AF (range 0xA00 - 0xBFF) */
 #define MBOX_MSG_RX_INLINE_IPSEC_LF_CFG 0xBFE
 #define MBOX_MSG_GET_ENG_GRP_NUM        0xBFF
diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c
index 1c5c262af48d..1bceabe5f0e2 100644
--- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c
+++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c
@@ -819,7 +819,7 @@ static int otx2_cptpf_probe(struct pci_dev *pdev,
 sysfs_grp_del:
 	sysfs_remove_group(&dev->kobj, &cptpf_sysfs_group);
 cleanup_eng_grps:
-	otx2_cpt_cleanup_eng_grps(pdev, &cptpf->eng_grps);
+	otx2_cpt_cleanup_eng_grps(cptpf);
 free_lmtst:
 	cn10k_cpt_lmtst_free(pdev, &cptpf->lfs);
 unregister_intr:
@@ -851,7 +851,7 @@ static void otx2_cptpf_remove(struct pci_dev *pdev)
 	/* Delete sysfs entry created for kernel VF limits */
 	sysfs_remove_group(&pdev->dev.kobj, &cptpf_sysfs_group);
 	/* Cleanup engine groups */
-	otx2_cpt_cleanup_eng_grps(pdev, &cptpf->eng_grps);
+	otx2_cpt_cleanup_eng_grps(cptpf);
 	/* Disable AF-PF mailbox interrupt */
 	cptpf_disable_afpf_mbox_intr(cptpf);
 	/* Destroy AF-PF mbox */
diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c
index b4b2d3d1cbc2..3ff3a49bd82b 100644
--- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c
+++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c
@@ -502,6 +502,7 @@ static void process_afpf_mbox_msg(struct otx2_cptpf_dev *cptpf,
 	case MBOX_MSG_NIX_INLINE_IPSEC_CFG:
 	case MBOX_MSG_CPT_LF_RESET:
 	case MBOX_MSG_LMTST_TBL_SETUP:
+	case MBOX_MSG_CPT_SET_ENG_GRP_NUM:
 		break;
 
 	default:
diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c
index cc47e361089a..c3ed08882c75 100644
--- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c
+++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c
@@ -1144,6 +1144,68 @@ int otx2_cpt_get_eng_grp(struct otx2_cpt_eng_grps *eng_grps, int eng_type)
 	return eng_grp_num;
 }
 
+static int otx2_cpt_get_eng_grp_type(struct otx2_cpt_eng_grps *eng_grps,
+				     int grp_num)
+{
+	struct otx2_cpt_eng_grp_info *grp;
+
+	grp = &eng_grps->grp[grp_num];
+	if (!grp->is_enabled)
+		return 0;
+
+	if (eng_grp_has_eng_type(grp, OTX2_CPT_SE_TYPES) &&
+	    !eng_grp_has_eng_type(grp, OTX2_CPT_IE_TYPES))
+		return OTX2_CPT_SE_TYPES;
+
+	if (eng_grp_has_eng_type(grp, OTX2_CPT_IE_TYPES))
+		return OTX2_CPT_IE_TYPES;
+
+	if (eng_grp_has_eng_type(grp, OTX2_CPT_AE_TYPES))
+		return OTX2_CPT_AE_TYPES;
+	return 0;
+}
+
+static int otx2_cpt_set_eng_grp_num(struct otx2_cptpf_dev *cptpf,
+				    enum otx2_cpt_eng_type eng_type, bool set)
+{
+	struct cpt_set_egrp_num *req;
+	struct pci_dev *pdev = cptpf->pdev;
+
+	if (!eng_type || eng_type >= OTX2_CPT_MAX_ENG_TYPES)
+		return -EINVAL;
+
+	req = (struct cpt_set_egrp_num *)
+	      otx2_mbox_alloc_msg_rsp(&cptpf->afpf_mbox, 0,
+				      sizeof(*req), sizeof(struct msg_rsp));
+	if (!req) {
+		dev_err(&pdev->dev, "RVU MBOX failed to get message.\n");
+		return -EFAULT;
+	}
+
+	memset(req, 0, sizeof(*req));
+	req->hdr.id = MBOX_MSG_CPT_SET_ENG_GRP_NUM;
+	req->hdr.sig = OTX2_MBOX_REQ_SIG;
+	req->hdr.pcifunc = OTX2_CPT_RVU_PFFUNC(cptpf->pdev, cptpf->pf_id, 0);
+	req->set = set;
+	req->eng_type = eng_type;
+	req->eng_grp_num = otx2_cpt_get_eng_grp(&cptpf->eng_grps, eng_type);
+
+	return otx2_cpt_send_mbox_msg(&cptpf->afpf_mbox, pdev);
+}
+
+static int otx2_cpt_set_eng_grp_nums(struct otx2_cptpf_dev *cptpf, bool set)
+{
+	enum otx2_cpt_eng_type type;
+	int ret;
+
+	for (type = OTX2_CPT_AE_TYPES; type < OTX2_CPT_MAX_ENG_TYPES; type++) {
+		ret = otx2_cpt_set_eng_grp_num(cptpf, type, set);
+		if (ret)
+			return ret;
+	}
+	return 0;
+}
+
 int otx2_cpt_create_eng_grps(struct otx2_cptpf_dev *cptpf,
 			     struct otx2_cpt_eng_grps *eng_grps)
 {
@@ -1224,6 +1286,10 @@ int otx2_cpt_create_eng_grps(struct otx2_cptpf_dev *cptpf,
 	if (ret)
 		goto delete_eng_grp;
 
+	ret = otx2_cpt_set_eng_grp_nums(cptpf, 1);
+	if (ret)
+		goto unset_eng_grp;
+
 	eng_grps->is_grps_created = true;
 
 	cpt_ucode_release_fw(&fw_info);
@@ -1271,6 +1337,8 @@ int otx2_cpt_create_eng_grps(struct otx2_cptpf_dev *cptpf,
 	mutex_unlock(&eng_grps->lock);
 	return 0;
 
+unset_eng_grp:
+	otx2_cpt_set_eng_grp_nums(cptpf, 0);
 delete_eng_grp:
 	delete_engine_grps(pdev, eng_grps);
 release_fw:
@@ -1350,9 +1418,10 @@ int otx2_cpt_disable_all_cores(struct otx2_cptpf_dev *cptpf)
 	return cptx_disable_all_cores(cptpf, total_cores, BLKADDR_CPT0);
 }
 
-void otx2_cpt_cleanup_eng_grps(struct pci_dev *pdev,
-			       struct otx2_cpt_eng_grps *eng_grps)
+void otx2_cpt_cleanup_eng_grps(struct otx2_cptpf_dev *cptpf)
 {
+	struct otx2_cpt_eng_grps *eng_grps = &cptpf->eng_grps;
+	struct pci_dev *pdev = cptpf->pdev;
 	struct otx2_cpt_eng_grp_info *grp;
 	int i, j;
 
@@ -1366,6 +1435,8 @@ void otx2_cpt_cleanup_eng_grps(struct pci_dev *pdev,
 			grp->engs[j].bmap = NULL;
 		}
 	}
+
+	otx2_cpt_set_eng_grp_nums(cptpf, 0);
 	mutex_unlock(&eng_grps->lock);
 }
 
@@ -1388,8 +1459,7 @@ int otx2_cpt_init_eng_grps(struct pci_dev *pdev,
 		dev_err(&pdev->dev,
 			"Number of engines %d > than max supported %d\n",
 			eng_grps->engs_num, OTX2_CPT_MAX_ENGINES);
-		ret = -EINVAL;
-		goto cleanup_eng_grps;
+		return -EINVAL;
 	}
 
 	for (i = 0; i < OTX2_CPT_MAX_ENGINE_GROUPS; i++) {
@@ -1403,14 +1473,20 @@ int otx2_cpt_init_eng_grps(struct pci_dev *pdev,
 					sizeof(long), GFP_KERNEL);
 			if (!grp->engs[j].bmap) {
 				ret = -ENOMEM;
-				goto cleanup_eng_grps;
+				goto release_bmap;
 			}
 		}
 	}
 	return 0;
 
-cleanup_eng_grps:
-	otx2_cpt_cleanup_eng_grps(pdev, eng_grps);
+release_bmap:
+	for (i = 0; i < OTX2_CPT_MAX_ENGINE_GROUPS; i++) {
+		grp = &eng_grps->grp[i];
+		for (j = 0; j < OTX2_CPT_MAX_ETYPES_PER_GRP; j++) {
+			kfree(grp->engs[j].bmap);
+			grp->engs[j].bmap = NULL;
+		}
+	}
 	return ret;
 }
 
@@ -1609,6 +1685,7 @@ int otx2_cpt_dl_custom_egrp_create(struct otx2_cptpf_dev *cptpf,
 	bool has_se, has_ie, has_ae;
 	struct fw_info_t fw_info;
 	int ucode_idx = 0;
+	int egrp;
 
 	if (!eng_grps->is_grps_created) {
 		dev_err(dev, "Not allowed before creating the default groups\n");
@@ -1746,7 +1823,21 @@ int otx2_cpt_dl_custom_egrp_create(struct otx2_cptpf_dev *cptpf,
 	}
 	ret = create_engine_group(dev, eng_grps, engs, grp_idx,
 				  (void **)uc_info, 1);
+	if (ret)
+		goto release_fw;
 
+	ret = otx2_cpt_set_eng_grp_num(cptpf, engs[0].type, 1);
+	if (ret) {
+		egrp = otx2_cpt_get_eng_grp(eng_grps, engs[0].type);
+		ret = delete_engine_group(dev, &eng_grps->grp[egrp]);
+	}
+	if (ucode_idx > 1) {
+		ret = otx2_cpt_set_eng_grp_num(cptpf, engs[1].type, 1);
+		if (ret) {
+			egrp = otx2_cpt_get_eng_grp(eng_grps, engs[1].type);
+			ret = delete_engine_group(dev, &eng_grps->grp[egrp]);
+		}
+	}
 release_fw:
 	cpt_ucode_release_fw(&fw_info);
 err_unlock:
@@ -1764,6 +1855,7 @@ int otx2_cpt_dl_custom_egrp_delete(struct otx2_cptpf_dev *cptpf,
 	struct device *dev = &cptpf->pdev->dev;
 	char *tmp, *err_msg;
 	int egrp;
+	int type;
 	int ret;
 
 	err_msg = "Invalid input string format(ex: egrp:0)";
@@ -1785,6 +1877,16 @@ int otx2_cpt_dl_custom_egrp_delete(struct otx2_cptpf_dev *cptpf,
 		return -EINVAL;
 	}
 	mutex_lock(&eng_grps->lock);
+	type = otx2_cpt_get_eng_grp_type(eng_grps, egrp);
+	if (!type) {
+		mutex_unlock(&eng_grps->lock);
+		return -EINVAL;
+	}
+	ret = otx2_cpt_set_eng_grp_num(cptpf, type, 0);
+	if (ret) {
+		mutex_unlock(&eng_grps->lock);
+		return -EINVAL;
+	}
 	ret = delete_engine_group(dev, &eng_grps->grp[egrp]);
 	mutex_unlock(&eng_grps->lock);
 
diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.h b/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.h
index 7e6a6a4ec37c..85ead693e359 100644
--- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.h
+++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.h
@@ -155,8 +155,7 @@ struct otx2_cpt_eng_grps {
 struct otx2_cptpf_dev;
 int otx2_cpt_init_eng_grps(struct pci_dev *pdev,
 			   struct otx2_cpt_eng_grps *eng_grps);
-void otx2_cpt_cleanup_eng_grps(struct pci_dev *pdev,
-			       struct otx2_cpt_eng_grps *eng_grps);
+void otx2_cpt_cleanup_eng_grps(struct otx2_cptpf_dev *cptpf);
 int otx2_cpt_create_eng_grps(struct otx2_cptpf_dev *cptpf,
 			     struct otx2_cpt_eng_grps *eng_grps);
 int otx2_cpt_disable_all_cores(struct otx2_cptpf_dev *cptpf);
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index 933073cd2280..cfa0c1df5536 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -220,6 +220,8 @@ M(CPT_CTX_CACHE_SYNC,   0xA07, cpt_ctx_cache_sync, msg_req, msg_rsp)    \
 M(CPT_LF_RESET,         0xA08, cpt_lf_reset, cpt_lf_rst_req, msg_rsp)	\
 M(CPT_FLT_ENG_INFO,     0xA09, cpt_flt_eng_info, cpt_flt_eng_info_req,	\
 			       cpt_flt_eng_info_rsp)			\
+M(CPT_SET_ENG_GRP_NUM,  0xA0A, cpt_set_eng_grp_num, cpt_set_egrp_num,   \
+				msg_rsp)				\
 /* SDP mbox IDs (range 0x1000 - 0x11FF) */				\
 M(SET_SDP_CHAN_INFO, 0x1000, set_sdp_chan_info, sdp_chan_info_msg, msg_rsp) \
 M(GET_SDP_CHAN_INFO, 0x1001, get_sdp_chan_info, msg_req, sdp_get_chan_info_msg) \
@@ -1959,6 +1961,20 @@ struct cpt_flt_eng_info_rsp {
 	u64 rsvd;
 };
 
+enum otx2_cpt_eng_type {
+	OTX2_CPT_AE_TYPES = 1,
+	OTX2_CPT_SE_TYPES = 2,
+	OTX2_CPT_IE_TYPES = 3,
+	OTX2_CPT_MAX_ENG_TYPES,
+};
+
+struct cpt_set_egrp_num {
+	struct mbox_msghdr hdr;
+	bool set;
+	u8 eng_type;
+	u8 eng_grp_num;
+};
+
 struct sdp_node_info {
 	/* Node to which this PF belongs */
 	u8 node_id;
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index 7ee1fdeb5295..4537b7ecfd90 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -565,6 +565,15 @@ struct rep_evtq_ent {
 	struct rep_event event;
 };
 
+struct rvu_cpt_eng_grp {
+	u8 eng_type;
+	u8 grp_num;
+};
+
+struct rvu_cpt {
+	struct rvu_cpt_eng_grp eng_grp[OTX2_CPT_MAX_ENG_TYPES];
+};
+
 struct rvu {
 	void __iomem		*afreg_base;
 	void __iomem		*pfreg_base;
@@ -645,6 +654,7 @@ struct rvu {
 	spinlock_t		mcs_intrq_lock;
 	/* CPT interrupt lock */
 	spinlock_t		cpt_intr_lock;
+	struct rvu_cpt		rvu_cpt;
 
 	struct mutex		mbox_lock; /* Serialize mbox up and down msgs */
 	u16			rep_pcifunc;
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
index f404117bf6c8..69c1796fba44 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
@@ -656,6 +656,27 @@ static int cpt_inline_ipsec_cfg_outbound(struct rvu *rvu, int blkaddr, u8 cptlf,
 	return 0;
 }
 
+int rvu_mbox_handler_cpt_set_eng_grp_num(struct rvu *rvu,
+					 struct cpt_set_egrp_num *req,
+					 struct msg_rsp *rsp)
+{
+	struct rvu_cpt *rvu_cpt = &rvu->rvu_cpt;
+	u8 eng_type = req->eng_type;
+
+	if (!eng_type || eng_type >= OTX2_CPT_MAX_ENG_TYPES)
+		return -EINVAL;
+
+	if (req->set) {
+		rvu_cpt->eng_grp[eng_type].grp_num = req->eng_grp_num;
+		rvu_cpt->eng_grp[eng_type].eng_type = eng_type;
+	} else {
+		rvu_cpt->eng_grp[eng_type].grp_num = 0;
+		rvu_cpt->eng_grp[eng_type].eng_type = 0;
+	}
+
+	return 0;
+}
+
 int rvu_mbox_handler_cpt_inline_ipsec_cfg(struct rvu *rvu,
 					  struct cpt_inline_ipsec_cfg_msg *req,
 					  struct msg_rsp *rsp)
-- 
2.43.0



* [PATCH net-next v4 02/14] octeontx2-af: Configure crypto hardware for inline ipsec
  2025-08-19  2:14 [PATCH net-next v4 00/14] Enable Inbound IPsec offload on Marvell CN10K SoC Tanmay Jagdale
  2025-08-19  2:14 ` [PATCH net-next v4 01/14] crypto: octeontx2: Share engine group info with AF driver Tanmay Jagdale
@ 2025-08-19  2:14 ` Tanmay Jagdale
  2025-08-19  2:14 ` [PATCH net-next v4 03/14] octeontx2-af: Setup Large Memory Transaction for crypto Tanmay Jagdale
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 16+ messages in thread
From: Tanmay Jagdale @ 2025-08-19  2:14 UTC (permalink / raw)
  To: davem, leon, horms, sgoutham, bbhushan2
  Cc: linux-crypto, netdev, Tanmay Jagdale

From: Bharat Bhushan <bbhushan2@marvell.com>

Currently the cpt_rx_inline_lf_cfg mailbox is handled by the CPT PF
driver to configure inbound inline IPsec. Ideally, inbound inline
IPsec configuration should be done by the AF driver.

This patch adds support to allocate, attach and initialize a CPT LF
from the AF. It also configures NIX to issue a CPT instruction when a
packet needs inline IPsec processing, and configures the CPT LF to
handle inline inbound instructions received from NIX.

Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
---
Changes in V4:
- None

Changes in V3:
- None

Changes in V2:
- Used GENMASK and FIELD_PREP macros
- Fixed compiler warning for unused function rvu_mbox_handler_cpt_rx_inline_lf_cfg
 
V1 Link: https://lore.kernel.org/netdev/20250502132005.611698-3-tanmay@marvell.com/
V2 Link: https://lore.kernel.org/netdev/20250618113020.130888-3-tanmay@marvell.com/
V3 Link: https://lore.kernel.org/netdev/20250711121317.340326-3-tanmay@marvell.com/

 .../net/ethernet/marvell/octeontx2/af/mbox.h  |  16 +
 .../net/ethernet/marvell/octeontx2/af/rvu.c   |   4 +-
 .../net/ethernet/marvell/octeontx2/af/rvu.h   |  34 ++
 .../ethernet/marvell/octeontx2/af/rvu_cpt.c   | 564 ++++++++++++++++++
 .../ethernet/marvell/octeontx2/af/rvu_cpt.h   |  67 +++
 .../ethernet/marvell/octeontx2/af/rvu_nix.c   |   4 +-
 .../ethernet/marvell/octeontx2/af/rvu_reg.h   |  15 +
 7 files changed, 700 insertions(+), 4 deletions(-)
 create mode 100644 drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.h

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index cfa0c1df5536..a2936a287b15 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -222,6 +222,8 @@ M(CPT_FLT_ENG_INFO,     0xA09, cpt_flt_eng_info, cpt_flt_eng_info_req,	\
 			       cpt_flt_eng_info_rsp)			\
 M(CPT_SET_ENG_GRP_NUM,  0xA0A, cpt_set_eng_grp_num, cpt_set_egrp_num,   \
 				msg_rsp)				\
+M(CPT_RX_INLINE_LF_CFG, 0xBFE, cpt_rx_inline_lf_cfg, cpt_rx_inline_lf_cfg_msg, \
+			       msg_rsp)					\
 /* SDP mbox IDs (range 0x1000 - 0x11FF) */				\
 M(SET_SDP_CHAN_INFO, 0x1000, set_sdp_chan_info, sdp_chan_info_msg, msg_rsp) \
 M(GET_SDP_CHAN_INFO, 0x1001, get_sdp_chan_info, msg_req, sdp_get_chan_info_msg) \
@@ -1968,6 +1970,20 @@ enum otx2_cpt_eng_type {
 	OTX2_CPT_MAX_ENG_TYPES,
 };
 
+struct cpt_rx_inline_lf_cfg_msg {
+	struct mbox_msghdr hdr;
+	u16 sso_pf_func;
+	u16 param1;
+	u16 param2;
+	u16 opcode;
+	u32 credit;
+	u32 credit_th;
+	u16 bpid;
+	u32 reserved;
+	u8 ctx_ilen_valid : 1;
+	u8 ctx_ilen : 7;
+};
+
 struct cpt_set_egrp_num {
 	struct mbox_msghdr hdr;
 	bool set;
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
index c6bb3aaa8e0d..a4e9430acba9 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
@@ -1777,8 +1777,8 @@ int rvu_mbox_handler_attach_resources(struct rvu *rvu,
 	return err;
 }
 
-static u16 rvu_get_msix_offset(struct rvu *rvu, struct rvu_pfvf *pfvf,
-			       int blkaddr, int lf)
+u16 rvu_get_msix_offset(struct rvu *rvu, struct rvu_pfvf *pfvf,
+			int blkaddr, int lf)
 {
 	u16 vec;
 
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index 4537b7ecfd90..9f982c9f5953 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -570,8 +570,38 @@ struct rvu_cpt_eng_grp {
 	u8 grp_num;
 };
 
+struct rvu_cpt_rx_inline_lf_cfg {
+	u16 sso_pf_func;
+	u16 param1;
+	u16 param2;
+	u16 opcode;
+	u32 credit;
+	u32 credit_th;
+	u16 bpid;
+	u32 reserved;
+	u8 ctx_ilen_valid : 1;
+	u8 ctx_ilen : 7;
+};
+
+struct rvu_cpt_inst_queue {
+	u8 *vaddr;
+	u8 *real_vaddr;
+	dma_addr_t dma_addr;
+	dma_addr_t real_dma_addr;
+	u32 size;
+};
+
 struct rvu_cpt {
 	struct rvu_cpt_eng_grp eng_grp[OTX2_CPT_MAX_ENG_TYPES];
+
+	/* RX inline ipsec lock */
+	struct mutex lock;
+	bool rx_initialized;
+	u16 msix_offset;
+	u8 inline_ipsec_egrp;
+	struct rvu_cpt_inst_queue cpt0_iq;
+	struct rvu_cpt_inst_queue cpt1_iq;
+	struct rvu_cpt_rx_inline_lf_cfg rx_cfg;
 };
 
 struct rvu {
@@ -1129,6 +1159,8 @@ void rvu_program_channels(struct rvu *rvu);
 
 /* CN10K NIX */
 void rvu_nix_block_cn10k_init(struct rvu *rvu, struct nix_hw *nix_hw);
+void nix_inline_ipsec_cfg(struct rvu *rvu, struct nix_inline_ipsec_cfg *req,
+			  int blkaddr);
 
 /* CN10K RVU - LMT*/
 void rvu_reset_lmt_map_tbl(struct rvu *rvu, u16 pcifunc);
@@ -1160,6 +1192,8 @@ int rvu_mcs_init(struct rvu *rvu);
 int rvu_mcs_flr_handler(struct rvu *rvu, u16 pcifunc);
 void rvu_mcs_ptp_cfg(struct rvu *rvu, u8 rpm_id, u8 lmac_id, bool ena);
 void rvu_mcs_exit(struct rvu *rvu);
+u16 rvu_get_msix_offset(struct rvu *rvu, struct rvu_pfvf *pfvf,
+			int blkaddr, int lf);
 
 /* Representor APIs */
 int rvu_rep_pf_init(struct rvu *rvu);
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
index 69c1796fba44..e1b170919ba9 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
@@ -11,6 +11,7 @@
 #include "rvu_reg.h"
 #include "mbox.h"
 #include "rvu.h"
+#include "rvu_cpt.h"
 
 /* CPT PF device id */
 #define	PCI_DEVID_OTX2_CPT_PF	0xA0FD
@@ -968,6 +969,33 @@ int rvu_mbox_handler_cpt_ctx_cache_sync(struct rvu *rvu, struct msg_req *req,
 	return rvu_cpt_ctx_flush(rvu, req->hdr.pcifunc);
 }
 
+static int cpt_rx_ipsec_lf_reset(struct rvu *rvu, int blkaddr, int slot)
+{
+	struct rvu_block *block;
+	u16 pcifunc = 0;
+	int cptlf, ret;
+	u64 ctl, ctl2;
+
+	block = &rvu->hw->block[blkaddr];
+
+	cptlf = rvu_get_lf(rvu, block, pcifunc, slot);
+	if (cptlf < 0)
+		return CPT_AF_ERR_LF_INVALID;
+
+	ctl = rvu_read64(rvu, blkaddr, CPT_AF_LFX_CTL(cptlf));
+	ctl2 = rvu_read64(rvu, blkaddr, CPT_AF_LFX_CTL2(cptlf));
+
+	ret = rvu_lf_reset(rvu, block, cptlf);
+	if (ret)
+		dev_err(rvu->dev, "Failed to reset blkaddr %d LF%d\n",
+			block->addr, cptlf);
+
+	rvu_write64(rvu, blkaddr, CPT_AF_LFX_CTL(cptlf), ctl);
+	rvu_write64(rvu, blkaddr, CPT_AF_LFX_CTL2(cptlf), ctl2);
+
+	return 0;
+}
+
 int rvu_mbox_handler_cpt_lf_reset(struct rvu *rvu, struct cpt_lf_rst_req *req,
 				  struct msg_rsp *rsp)
 {
@@ -1087,6 +1115,72 @@ static void cpt_rxc_teardown(struct rvu *rvu, int blkaddr)
 #define DQPTR      GENMASK_ULL(19, 0)
 #define NQPTR      GENMASK_ULL(51, 32)
 
+static void cpt_rx_ipsec_lf_enable_iqueue(struct rvu *rvu, int blkaddr,
+					  int slot)
+{
+	u64 val;
+
+	/* Set Execution Enable of instruction queue */
+	val = otx2_cpt_read64(rvu->pfreg_base, blkaddr, slot, CPT_LF_INPROG);
+	val |= CPT_LF_INPROG_EXEC_ENABLE;
+	otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot, CPT_LF_INPROG, val);
+
+	/* Set iqueue's enqueuing */
+	val = otx2_cpt_read64(rvu->pfreg_base, blkaddr, slot, CPT_LF_CTL);
+	val |= CPT_LF_CTL_ENQ_ENA;
+	otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot, CPT_LF_CTL, val);
+}
+
+static void cpt_rx_ipsec_lf_disable_iqueue(struct rvu *rvu, int blkaddr,
+					   int slot)
+{
+	int timeout = 1000000;
+	u64 inprog, inst_ptr;
+	u64 qsize, pending;
+	int i = 0;
+
+	/* Disable instructions enqueuing */
+	otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot, CPT_LF_CTL, 0x0);
+
+	inprog = otx2_cpt_read64(rvu->pfreg_base, blkaddr, slot, CPT_LF_INPROG);
+	inprog |= CPT_LF_INPROG_EXEC_ENABLE;
+	otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot, CPT_LF_INPROG, inprog);
+
+	qsize = otx2_cpt_read64(rvu->pfreg_base, blkaddr, slot, CPT_LF_Q_SIZE)
+		 & 0x7FFF;
+	do {
+		inst_ptr = otx2_cpt_read64(rvu->pfreg_base, blkaddr, slot,
+					   CPT_LF_Q_INST_PTR);
+		pending = (FIELD_GET(XQ_XOR, inst_ptr) * qsize * 40) +
+			   FIELD_GET(NQPTR, inst_ptr) -
+			   FIELD_GET(DQPTR, inst_ptr);
+		udelay(1);
+		timeout--;
+	} while (pending != 0 && timeout != 0);
+
+	if (timeout == 0)
+		dev_warn(rvu->dev, "TIMEOUT: CPT poll on pending instructions\n");
+
+	timeout = 1000000;
+	/* Wait for CPT queue to become execution-quiescent */
+	do {
+		inprog = otx2_cpt_read64(rvu->pfreg_base, blkaddr, slot,
+					 CPT_LF_INPROG);
+		if ((FIELD_GET(INFLIGHT, inprog) == 0) &&
+		    (FIELD_GET(GRB_CNT, inprog) == 0)) {
+			i++;
+		} else {
+			i = 0;
+			timeout--;
+		}
+	} while ((timeout != 0) && (i < 10));
+
+	if (timeout == 0)
+		dev_warn(rvu->dev, "TIMEOUT: CPT poll on inflight count\n");
+	/* Wait for 2 us to flush all queue writes to memory */
+	udelay(2);
+}
+
 static void cpt_lf_disable_iqueue(struct rvu *rvu, int blkaddr, int slot)
 {
 	int timeout = 1000000;
@@ -1310,6 +1404,475 @@ int rvu_cpt_ctx_flush(struct rvu *rvu, u16 pcifunc)
 	return 0;
 }
 
+static irqreturn_t rvu_cpt_rx_ipsec_misc_intr_handler(int irq, void *ptr)
+{
+	struct rvu_block *block = ptr;
+	struct rvu *rvu = block->rvu;
+	int blkaddr = block->addr;
+	struct device *dev = rvu->dev;
+	int slot = 0;
+	u64 val;
+
+	val = otx2_cpt_read64(rvu->pfreg_base, blkaddr, slot, CPT_LF_MISC_INT);
+
+	if (val & CPT_LF_MISC_INT_FAULT) {
+		dev_err(dev, "Memory error detected while executing CPT_INST_S, LF %d.\n",
+			slot);
+	} else if (val & CPT_LF_MISC_INT_HWERR) {
+		dev_err(dev, "HW error from an engine executing CPT_INST_S, LF %d.\n",
+			slot);
+	} else if (val & CPT_LF_MISC_INT_NWRP) {
+		dev_err(dev, "SMMU fault while writing CPT_RES_S to CPT_INST_S[RES_ADDR], LF %d.\n",
+			slot);
+	} else if (val & CPT_LF_MISC_INT_IRDE) {
+		dev_err(dev, "Memory error when accessing instruction memory queue CPT_LF_Q_BASE[ADDR].\n");
+	} else if (val & CPT_LF_MISC_INT_NQERR) {
+		dev_err(dev, "Error enqueuing an instruction received at CPT_LF_NQ.\n");
+	} else {
+		dev_err(dev, "Unhandled interrupt in CPT LF %d\n", slot);
+		return IRQ_NONE;
+	}
+
+	/* Acknowledge interrupts */
+	otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot, CPT_LF_MISC_INT,
+			 val & CPT_LF_MISC_INT_MASK);
+
+	return IRQ_HANDLED;
+}
+
+static int rvu_cpt_rx_inline_setup_irq(struct rvu *rvu, int blkaddr, int slot)
+{
+	struct rvu_hwinfo *hw = rvu->hw;
+	struct rvu_block *block;
+	struct rvu_pfvf *pfvf;
+	u16 msix_offset;
+	int pcifunc = 0;
+	int ret, cptlf;
+
+	pfvf = rvu_get_pfvf(rvu, pcifunc);
+	if (!pfvf->msix.bmap)
+		return -ENODEV;
+
+	block = &hw->block[blkaddr];
+	cptlf = rvu_get_lf(rvu, block, pcifunc, slot);
+	if (cptlf < 0)
+		return CPT_AF_ERR_LF_INVALID;
+
+	msix_offset = rvu_get_msix_offset(rvu, pfvf, blkaddr, cptlf);
+	if (msix_offset == MSIX_VECTOR_INVALID)
+		return -ENODEV;
+
+	ret = rvu_cpt_do_register_interrupt(block, msix_offset,
+					    rvu_cpt_rx_ipsec_misc_intr_handler,
+					    "CPTLF RX IPSEC MISC");
+	if (ret)
+		return ret;
+
+	/* Enable All Misc interrupts */
+	otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot,
+			 CPT_LF_MISC_INT_ENA_W1S, CPT_LF_MISC_INT_MASK);
+
+	rvu->rvu_cpt.msix_offset = msix_offset;
+	return 0;
+}
+
+static void rvu_cpt_rx_inline_cleanup_irq(struct rvu *rvu, int blkaddr,
+					  int slot)
+{
+	struct rvu_hwinfo *hw = rvu->hw;
+	struct rvu_block *block;
+
+	/* Disable All Misc interrupts */
+	otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot,
+			 CPT_LF_MISC_INT_ENA_W1C, CPT_LF_MISC_INT_MASK);
+
+	block = &hw->block[blkaddr];
+	free_irq(pci_irq_vector(rvu->pdev, rvu->rvu_cpt.msix_offset), block);
+}
+
+static int rvu_rx_attach_cptlf(struct rvu *rvu, int blkaddr)
+{
+	struct rsrc_attach attach;
+
+	memset(&attach, 0, sizeof(struct rsrc_attach));
+	attach.hdr.id = MBOX_MSG_ATTACH_RESOURCES;
+	attach.hdr.sig = OTX2_MBOX_REQ_SIG;
+	attach.hdr.ver = OTX2_MBOX_VERSION;
+	attach.hdr.pcifunc = 0;
+	attach.modify = 1;
+	attach.cptlfs = 1;
+	attach.cpt_blkaddr = blkaddr;
+
+	return rvu_mbox_handler_attach_resources(rvu, &attach, NULL);
+}
+
+static int rvu_rx_detach_cptlf(struct rvu *rvu)
+{
+	struct rsrc_detach detach;
+
+	memset(&detach, 0, sizeof(struct rsrc_detach));
+	detach.hdr.id = MBOX_MSG_DETACH_RESOURCES;
+	detach.hdr.sig = OTX2_MBOX_REQ_SIG;
+	detach.hdr.ver = OTX2_MBOX_VERSION;
+	detach.hdr.pcifunc = 0;
+	detach.partial = 1;
+	detach.cptlfs = 1;
+
+	return rvu_mbox_handler_detach_resources(rvu, &detach, NULL);
+}
+
+/* Allocate memory for the CPT inbound instruction queue.
+ * Instruction queue memory format is:
+ *      -----------------------------
+ *     | Instruction Group memory    |
+ *     |  (CPT_LF_Q_SIZE[SIZE_DIV40] |
+ *     |   x 16 Bytes)               |
+ *     |                             |
+ *      ----------------------------- <-- CPT_LF_Q_BASE[ADDR]
+ *     | Flow Control (128 Bytes)    |
+ *     |                             |
+ *      -----------------------------
+ *     |  Instruction Memory         |
+ *     |  (CPT_LF_Q_SIZE[SIZE_DIV40] |
+ *     |   × 40 × 64 bytes)          |
+ *     |                             |
+ *      -----------------------------
+ */
+static int rvu_rx_cpt_iq_alloc(struct rvu *rvu, struct rvu_cpt_inst_queue *iq)
+{
+	iq->size = RVU_CPT_INST_QLEN_BYTES + RVU_CPT_Q_FC_LEN +
+		    RVU_CPT_INST_GRP_QLEN_BYTES + OTX2_ALIGN;
+
+	iq->real_vaddr = dma_alloc_coherent(rvu->dev, iq->size,
+					    &iq->real_dma_addr, GFP_KERNEL);
+	if (!iq->real_vaddr)
+		return -ENOMEM;
+
+	/* iq->vaddr/dma_addr points to Flow Control location */
+	iq->vaddr = iq->real_vaddr + RVU_CPT_INST_GRP_QLEN_BYTES;
+	iq->dma_addr = iq->real_dma_addr + RVU_CPT_INST_GRP_QLEN_BYTES;
+
+	/* Align pointers */
+	iq->vaddr = PTR_ALIGN(iq->vaddr, OTX2_ALIGN);
+	iq->dma_addr = PTR_ALIGN(iq->dma_addr, OTX2_ALIGN);
+	return 0;
+}
+
+static void rvu_rx_cpt_iq_free(struct rvu *rvu, int blkaddr)
+{
+	struct rvu_cpt_inst_queue *iq;
+
+	if (blkaddr == BLKADDR_CPT0)
+		iq = &rvu->rvu_cpt.cpt0_iq;
+	else
+		iq = &rvu->rvu_cpt.cpt1_iq;
+
+	if (iq->real_vaddr)
+		dma_free_coherent(rvu->dev, iq->size, iq->real_vaddr,
+				  iq->real_dma_addr);
+
+	iq->real_vaddr = NULL;
+	iq->vaddr = NULL;
+}
+
+static int rvu_rx_cpt_set_grp_pri_ilen(struct rvu *rvu, int blkaddr, int cptlf)
+{
+	u64 reg_val;
+
+	reg_val = rvu_read64(rvu, blkaddr, CPT_AF_LFX_CTL(cptlf));
+	/* Set High priority */
+	reg_val |= CPT_AF_LFX_CTL_HIGH_PRI;
+	/* Set engine group */
+	reg_val |= FIELD_PREP(CPT_AF_LFX_CTL_EGRP, (1ULL << rvu->rvu_cpt.inline_ipsec_egrp));
+	/* Set ilen if valid */
+	if (rvu->rvu_cpt.rx_cfg.ctx_ilen_valid)
+		reg_val |= FIELD_PREP(CPT_AF_LFX_CTL_CTX_ILEN,
+				      rvu->rvu_cpt.rx_cfg.ctx_ilen);
+
+	rvu_write64(rvu, blkaddr, CPT_AF_LFX_CTL(cptlf), reg_val);
+	return 0;
+}
+
+static int rvu_cpt_rx_inline_cptlf_init(struct rvu *rvu, int blkaddr, int slot)
+{
+	struct rvu_cpt_inst_queue *iq;
+	struct rvu_block *block;
+	int pcifunc = 0;
+	int cptlf;
+	int err;
+	u64 val;
+
+	/* Attach cptlf with AF for inline inbound ipsec */
+	err = rvu_rx_attach_cptlf(rvu, blkaddr);
+	if (err)
+		return err;
+
+	block = &rvu->hw->block[blkaddr];
+	cptlf = rvu_get_lf(rvu, block, pcifunc, slot);
+	if (cptlf < 0) {
+		err = CPT_AF_ERR_LF_INVALID;
+		goto detach_cptlf;
+	}
+
+	if (blkaddr == BLKADDR_CPT0)
+		iq = &rvu->rvu_cpt.cpt0_iq;
+	else
+		iq = &rvu->rvu_cpt.cpt1_iq;
+
+	/* Allocate CPT instruction queue */
+	err = rvu_rx_cpt_iq_alloc(rvu, iq);
+	if (err)
+		goto detach_cptlf;
+
+	/* reset CPT LF */
+	cpt_rx_ipsec_lf_reset(rvu, blkaddr, slot);
+
+	/* Disable IQ */
+	cpt_rx_ipsec_lf_disable_iqueue(rvu, blkaddr, slot);
+
+	/* Set IQ base address */
+	otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot, CPT_LF_Q_BASE,
+			 iq->dma_addr);
+	/* Set IQ size */
+	val = FIELD_PREP(CPT_LF_Q_SIZE_DIV40, RVU_CPT_SIZE_DIV40 +
+			 RVU_CPT_EXTRA_SIZE_DIV40);
+	otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot, CPT_LF_Q_SIZE, val);
+
+	/* Enable IQ */
+	cpt_rx_ipsec_lf_enable_iqueue(rvu, blkaddr, slot);
+
+	/* Set High priority */
+	rvu_rx_cpt_set_grp_pri_ilen(rvu, blkaddr, cptlf);
+
+	return 0;
+detach_cptlf:
+	rvu_rx_detach_cptlf(rvu);
+	return err;
+}
+
+static void rvu_cpt_rx_inline_cptlf_clean(struct rvu *rvu, int blkaddr,
+					  int slot)
+{
+	/* Disable IQ */
+	cpt_rx_ipsec_lf_disable_iqueue(rvu, blkaddr, slot);
+
+	/* Free Instruction Queue */
+	rvu_rx_cpt_iq_free(rvu, blkaddr);
+
+	/* Detach CPTLF */
+	rvu_rx_detach_cptlf(rvu);
+}
+
+static void rvu_cpt_save_rx_inline_lf_cfg(struct rvu *rvu,
+					  struct cpt_rx_inline_lf_cfg_msg *req)
+{
+	rvu->rvu_cpt.rx_cfg.sso_pf_func = req->sso_pf_func;
+	rvu->rvu_cpt.rx_cfg.param1 = req->param1;
+	rvu->rvu_cpt.rx_cfg.param2 = req->param2;
+	rvu->rvu_cpt.rx_cfg.opcode = req->opcode;
+	rvu->rvu_cpt.rx_cfg.credit = req->credit;
+	rvu->rvu_cpt.rx_cfg.credit_th = req->credit_th;
+	rvu->rvu_cpt.rx_cfg.bpid = req->bpid;
+	rvu->rvu_cpt.rx_cfg.ctx_ilen_valid = req->ctx_ilen_valid;
+	rvu->rvu_cpt.rx_cfg.ctx_ilen = req->ctx_ilen;
+}
+
+static void
+rvu_show_diff_cpt_rx_inline_lf_cfg(struct rvu *rvu,
+				   struct cpt_rx_inline_lf_cfg_msg *req)
+{
+	struct device *dev = rvu->dev;
+
+	if (rvu->rvu_cpt.rx_cfg.sso_pf_func != req->sso_pf_func)
+		dev_info(dev, "Mismatch RX inline config sso_pf_func Req %x Prog %x\n",
+			 req->sso_pf_func, rvu->rvu_cpt.rx_cfg.sso_pf_func);
+	if (rvu->rvu_cpt.rx_cfg.param1 != req->param1)
+		dev_info(dev, "Mismatch RX inline config param1 Req %x Prog %x\n",
+			 req->param1, rvu->rvu_cpt.rx_cfg.param1);
+	if (rvu->rvu_cpt.rx_cfg.param2 != req->param2)
+		dev_info(dev, "Mismatch RX inline config param2 Req %x Prog %x\n",
+			 req->param2, rvu->rvu_cpt.rx_cfg.param2);
+	if (rvu->rvu_cpt.rx_cfg.opcode != req->opcode)
+		dev_info(dev, "Mismatch RX inline config opcode Req %x Prog %x\n",
+			 req->opcode, rvu->rvu_cpt.rx_cfg.opcode);
+	if (rvu->rvu_cpt.rx_cfg.credit != req->credit)
+		dev_info(dev, "Mismatch RX inline config credit Req %x Prog %x\n",
+			 req->credit, rvu->rvu_cpt.rx_cfg.credit);
+	if (rvu->rvu_cpt.rx_cfg.credit_th != req->credit_th)
+		dev_info(dev, "Mismatch RX inline config credit_th Req %x Prog %x\n",
+			 req->credit_th, rvu->rvu_cpt.rx_cfg.credit_th);
+	if (rvu->rvu_cpt.rx_cfg.bpid != req->bpid)
+		dev_info(dev, "Mismatch RX inline config bpid Req %x Prog %x\n",
+			 req->bpid, rvu->rvu_cpt.rx_cfg.bpid);
+	if (rvu->rvu_cpt.rx_cfg.ctx_ilen != req->ctx_ilen)
+		dev_info(dev, "Mismatch RX inline config ctx_ilen Req %x Prog %x\n",
+			 req->ctx_ilen, rvu->rvu_cpt.rx_cfg.ctx_ilen);
+	if (rvu->rvu_cpt.rx_cfg.ctx_ilen_valid != req->ctx_ilen_valid)
+		dev_info(dev, "Mismatch RX inline config ctx_ilen_valid Req %x Prog %x\n",
+			 req->ctx_ilen_valid,
+			 rvu->rvu_cpt.rx_cfg.ctx_ilen_valid);
+}
+
+static void rvu_cpt_rx_inline_nix_cfg(struct rvu *rvu)
+{
+	struct nix_inline_ipsec_cfg nix_cfg;
+
+	nix_cfg.enable = 1;
+	nix_cfg.credit_th = rvu->rvu_cpt.rx_cfg.credit_th;
+	nix_cfg.bpid = rvu->rvu_cpt.rx_cfg.bpid;
+	if (!rvu->rvu_cpt.rx_cfg.credit || rvu->rvu_cpt.rx_cfg.credit >
+	    RVU_CPT_INST_QLEN_MSGS)
+		nix_cfg.cpt_credit = RVU_CPT_INST_QLEN_MSGS - 1;
+	else
+		nix_cfg.cpt_credit = rvu->rvu_cpt.rx_cfg.credit - 1;
+
+	nix_cfg.gen_cfg.egrp = rvu->rvu_cpt.inline_ipsec_egrp;
+	if (rvu->rvu_cpt.rx_cfg.opcode) {
+		nix_cfg.gen_cfg.opcode = rvu->rvu_cpt.rx_cfg.opcode;
+	} else {
+		if (is_rvu_otx2(rvu))
+			nix_cfg.gen_cfg.opcode = OTX2_CPT_INLINE_RX_OPCODE;
+		else
+			nix_cfg.gen_cfg.opcode = CN10K_CPT_INLINE_RX_OPCODE;
+	}
+
+	nix_cfg.gen_cfg.param1 = rvu->rvu_cpt.rx_cfg.param1;
+	nix_cfg.gen_cfg.param2 = rvu->rvu_cpt.rx_cfg.param2;
+	nix_cfg.inst_qsel.cpt_pf_func = rvu_get_pf(rvu->pdev, 0);
+	nix_cfg.inst_qsel.cpt_slot = 0;
+
+	nix_inline_ipsec_cfg(rvu, &nix_cfg, BLKADDR_NIX0);
+
+	if (is_block_implemented(rvu->hw, BLKADDR_CPT1))
+		nix_inline_ipsec_cfg(rvu, &nix_cfg, BLKADDR_NIX1);
+}
+
+static int rvu_cpt_rx_inline_ipsec_cfg(struct rvu *rvu)
+{
+	struct rvu_block *block;
+	struct cpt_inline_ipsec_cfg_msg req;
+	u16 pcifunc  = 0;
+	int cptlf;
+	int err;
+
+	memset(&req, 0, sizeof(struct cpt_inline_ipsec_cfg_msg));
+	req.sso_pf_func_ovrd = 0; /* TODO: add a sysfs interface to set this */
+	req.sso_pf_func = rvu->rvu_cpt.rx_cfg.sso_pf_func;
+	req.enable = 1;
+
+	block = &rvu->hw->block[BLKADDR_CPT0];
+	cptlf = rvu_get_lf(rvu, block, pcifunc, 0);
+	if (cptlf < 0)
+		return CPT_AF_ERR_LF_INVALID;
+
+	err = cpt_inline_ipsec_cfg_inbound(rvu, BLKADDR_CPT0, cptlf, &req);
+	if (err)
+		return err;
+
+	if (!is_block_implemented(rvu->hw, BLKADDR_CPT1))
+		return 0;
+
+	block = &rvu->hw->block[BLKADDR_CPT1];
+	cptlf = rvu_get_lf(rvu, block, pcifunc, 0);
+	if (cptlf < 0)
+		return CPT_AF_ERR_LF_INVALID;
+
+	return cpt_inline_ipsec_cfg_inbound(rvu, BLKADDR_CPT1, cptlf, &req);
+}
+
+static int rvu_cpt_rx_inline_cptlf_setup(struct rvu *rvu, int blkaddr, int slot)
+{
+	int err;
+
+	err = rvu_cpt_rx_inline_cptlf_init(rvu, blkaddr, slot);
+	if (err) {
+		dev_err(rvu->dev,
+			"CPTLF configuration failed for RX inline ipsec\n");
+		return err;
+	}
+
+	err = rvu_cpt_rx_inline_setup_irq(rvu, blkaddr, slot);
+	if (err) {
+		dev_err(rvu->dev,
+			"CPTLF Interrupt setup failed for RX inline ipsec\n");
+		rvu_cpt_rx_inline_cptlf_clean(rvu, blkaddr, slot);
+		return err;
+	}
+	return 0;
+}
+
+static void rvu_rx_cptlf_cleanup(struct rvu *rvu, int blkaddr, int slot)
+{
+	/* IRQ cleanup */
+	rvu_cpt_rx_inline_cleanup_irq(rvu, blkaddr, slot);
+
+	/* CPTLF cleanup */
+	rvu_cpt_rx_inline_cptlf_clean(rvu, blkaddr, slot);
+}
+
+int rvu_mbox_handler_cpt_rx_inline_lf_cfg(struct rvu *rvu,
+					  struct cpt_rx_inline_lf_cfg_msg *req,
+					  struct msg_rsp *rsp)
+{
+	u8 egrp = OTX2_CPT_INVALID_CRYPTO_ENG_GRP;
+	int err;
+	int i;
+
+	mutex_lock(&rvu->rvu_cpt.lock);
+	if (rvu->rvu_cpt.rx_initialized) {
+		dev_info(rvu->dev, "Inline RX CPT already initialized\n");
+		rvu_show_diff_cpt_rx_inline_lf_cfg(rvu, req);
+		err = 0;
+		goto unlock;
+	}
+
+	/* Get Inline Ipsec Engine Group */
+	for (i = 0; i < OTX2_CPT_MAX_ENG_TYPES; i++) {
+		if (rvu->rvu_cpt.eng_grp[i].eng_type == OTX2_CPT_IE_TYPES) {
+			egrp = rvu->rvu_cpt.eng_grp[i].grp_num;
+			break;
+		}
+	}
+
+	if (egrp == OTX2_CPT_INVALID_CRYPTO_ENG_GRP) {
+		dev_err(rvu->dev,
+			"Engine group for inline ipsec not available\n");
+		err = -ENODEV;
+		goto unlock;
+	}
+	rvu->rvu_cpt.inline_ipsec_egrp = egrp;
+
+	rvu_cpt_save_rx_inline_lf_cfg(rvu, req);
+
+	err = rvu_cpt_rx_inline_cptlf_setup(rvu, BLKADDR_CPT0, 0);
+	if (err)
+		goto unlock;
+
+	if (is_block_implemented(rvu->hw, BLKADDR_CPT1)) {
+		err = rvu_cpt_rx_inline_cptlf_setup(rvu, BLKADDR_CPT1, 0);
+		if (err)
+			goto cptlf_cleanup;
+	}
+
+	rvu_cpt_rx_inline_nix_cfg(rvu);
+
+	err = rvu_cpt_rx_inline_ipsec_cfg(rvu);
+	if (err)
+		goto cptlf1_cleanup;
+
+	rvu->rvu_cpt.rx_initialized = true;
+	mutex_unlock(&rvu->rvu_cpt.lock);
+	return 0;
+
+cptlf1_cleanup:
+	rvu_rx_cptlf_cleanup(rvu, BLKADDR_CPT1, 0);
+cptlf_cleanup:
+	rvu_rx_cptlf_cleanup(rvu, BLKADDR_CPT0, 0);
+unlock:
+	mutex_unlock(&rvu->rvu_cpt.lock);
+	return err;
+}
+
 #define MAX_RXC_ICB_CNT  GENMASK_ULL(40, 32)
 
 int rvu_cpt_init(struct rvu *rvu)
@@ -1336,5 +1899,6 @@ int rvu_cpt_init(struct rvu *rvu)
 
 	spin_lock_init(&rvu->cpt_intr_lock);
 
+	mutex_init(&rvu->rvu_cpt.lock);
 	return 0;
 }
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.h
new file mode 100644
index 000000000000..4b57c7038d6c
--- /dev/null
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.h
@@ -0,0 +1,67 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Marvell AF CPT driver
+ *
+ * Copyright (C) 2022 Marvell.
+ */
+
+#ifndef RVU_CPT_H
+#define RVU_CPT_H
+
+#include <linux/types.h>
+
+/* CPT instruction size in bytes */
+#define RVU_CPT_INST_SIZE	64
+
+/* CPT instruction (CPT_INST_S) queue length */
+#define RVU_CPT_INST_QLEN	8200
+
+/* CPT instruction queue size passed to HW is in units of
+ * 40*CPT_INST_S messages.
+ */
+#define RVU_CPT_SIZE_DIV40 (RVU_CPT_INST_QLEN / 40)
+
+/* CPT instruction and pending queues length in CPT_INST_S messages */
+#define RVU_CPT_INST_QLEN_MSGS	((RVU_CPT_SIZE_DIV40 - 1) * 40)
+
+/* CPT needs 320 free entries */
+#define RVU_CPT_INST_QLEN_EXTRA_BYTES	(320 * RVU_CPT_INST_SIZE)
+#define RVU_CPT_EXTRA_SIZE_DIV40	(320 / 40)
+
+/* CPT instruction queue length in bytes */
+#define RVU_CPT_INST_QLEN_BYTES                                               \
+		((RVU_CPT_SIZE_DIV40 * 40 * RVU_CPT_INST_SIZE) +             \
+		RVU_CPT_INST_QLEN_EXTRA_BYTES)
+
+/* CPT instruction group queue length in bytes */
+#define RVU_CPT_INST_GRP_QLEN_BYTES                                           \
+		((RVU_CPT_SIZE_DIV40 + RVU_CPT_EXTRA_SIZE_DIV40) * 16)
+
+/* CPT FC length in bytes */
+#define RVU_CPT_Q_FC_LEN 128
+
+/* CPT LF_Q_SIZE Register */
+#define CPT_LF_Q_SIZE_DIV40 GENMASK_ULL(14, 0)
+
+/* CPT invalid engine group num */
+#define OTX2_CPT_INVALID_CRYPTO_ENG_GRP 0xFF
+
+/* Fastpath ipsec opcode with inplace processing */
+#define OTX2_CPT_INLINE_RX_OPCODE (0x26 | (1 << 6))
+#define CN10K_CPT_INLINE_RX_OPCODE (0x29 | (1 << 6))
+
+/* Calculate CPT register offset */
+#define CPT_RVU_FUNC_ADDR_S(blk, slot, offs) \
+		(((blk) << 20) | ((slot) << 12) | (offs))
+
+static inline void otx2_cpt_write64(void __iomem *reg_base, u64 blk, u64 slot,
+				    u64 offs, u64 val)
+{
+	writeq_relaxed(val, reg_base + CPT_RVU_FUNC_ADDR_S(blk, slot, offs));
+}
+
+static inline u64 otx2_cpt_read64(void __iomem *reg_base, u64 blk, u64 slot,
+				  u64 offs)
+{
+	return readq_relaxed(reg_base + CPT_RVU_FUNC_ADDR_S(blk, slot, offs));
+}
+#endif // RVU_CPT_H
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
index 60db1f616cc8..91af1ada11c2 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
@@ -5490,8 +5490,8 @@ int rvu_mbox_handler_nix_lso_format_cfg(struct rvu *rvu,
 #define CPT_INST_CREDIT_BPID  GENMASK_ULL(30, 22)
 #define CPT_INST_CREDIT_CNT   GENMASK_ULL(21, 0)
 
-static void nix_inline_ipsec_cfg(struct rvu *rvu, struct nix_inline_ipsec_cfg *req,
-				 int blkaddr)
+void nix_inline_ipsec_cfg(struct rvu *rvu, struct nix_inline_ipsec_cfg *req,
+			  int blkaddr)
 {
 	u8 cpt_idx, cpt_blkaddr;
 	u64 val;
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
index 62cdc714ba57..d92c154b08cf 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
@@ -509,6 +509,9 @@
 #define CPT_AF_EXE_EPCI_OUTBX_CNT(a)    (0x25000ull | (u64)(a) << 3)
 #define CPT_AF_EXEX_UCODE_BASE(a)       (0x26000ull | (u64)(a) << 3)
 #define CPT_AF_LFX_CTL(a)               (0x27000ull | (u64)(a) << 3)
+#define CPT_AF_LFX_CTL_HIGH_PRI		BIT_ULL(0)
+#define CPT_AF_LFX_CTL_CTX_ILEN		GENMASK_ULL(19, 17)
+#define CPT_AF_LFX_CTL_EGRP		GENMASK_ULL(55, 48)
 #define CPT_AF_LFX_CTL2(a)              (0x29000ull | (u64)(a) << 3)
 #define CPT_AF_CPTCLK_CNT               (0x2a000)
 #define CPT_AF_PF_FUNC                  (0x2b000)
@@ -562,7 +565,19 @@
 #define CPT_AF_LF_SSO_PF_FUNC_SHIFT 32
 
 #define CPT_LF_CTL                      0x10
+#define CPT_LF_CTL_ENQ_ENA		BIT_ULL(0)
 #define CPT_LF_INPROG                   0x40
+#define CPT_LF_INPROG_EXEC_ENABLE	BIT_ULL(16)
+#define CPT_LF_MISC_INT                 0xb0
+#define CPT_LF_MISC_INT_NQERR		BIT_ULL(1)
+#define CPT_LF_MISC_INT_IRDE		BIT_ULL(2)
+#define CPT_LF_MISC_INT_NWRP		BIT_ULL(3)
+#define CPT_LF_MISC_INT_HWERR		BIT_ULL(5)
+#define CPT_LF_MISC_INT_FAULT		BIT_ULL(6)
+#define CPT_LF_MISC_INT_MASK            0x6e
+#define CPT_LF_MISC_INT_ENA_W1S         0xd0
+#define CPT_LF_MISC_INT_ENA_W1C         0xe0
+#define CPT_LF_Q_BASE                   0xf0
 #define CPT_LF_Q_SIZE                   0x100
 #define CPT_LF_Q_INST_PTR               0x110
 #define CPT_LF_Q_GRP_PTR                0x120
-- 
2.43.0



* [PATCH net-next v4 03/14] octeontx2-af: Setup Large Memory Transaction for crypto
  2025-08-19  2:14 [PATCH net-next v4 00/14] Enable Inbound IPsec offload on Marvell CN10K SoC Tanmay Jagdale
  2025-08-19  2:14 ` [PATCH net-next v4 01/14] crypto: octeontx2: Share engine group info with AF driver Tanmay Jagdale
  2025-08-19  2:14 ` [PATCH net-next v4 02/14] octeontx2-af: Configure crypto hardware for inline ipsec Tanmay Jagdale
@ 2025-08-19  2:14 ` Tanmay Jagdale
  2025-08-19  2:14 ` [PATCH net-next v4 04/14] octeontx2-af: Handle inbound inline ipsec config in AF Tanmay Jagdale
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 16+ messages in thread
From: Tanmay Jagdale @ 2025-08-19  2:14 UTC (permalink / raw)
  To: davem, leon, horms, sgoutham, bbhushan2
  Cc: linux-crypto, netdev, Tanmay Jagdale

From: Bharat Bhushan <bbhushan2@marvell.com>

A Large Memory Transaction store (LMTST) operation is required for
enqueuing work to the CPT hardware. An LMTST operation makes one or
more 128-byte write operations to a normal, cacheable memory region.
This patch sets up an LMTST memory region for enqueuing work to the
CPT hardware.
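
For context, here is a hedged sketch of how such a region is typically
used to submit a single CPT_INST_S. rvu_cpt_inst_enqueue() and the NQ
doorbell address are illustrative assumptions, and the final LMTST
trigger encoding is simplified (see cn10k_lmt_flush() in the octeontx2
nic driver for the real sequence):

	/* Sketch: enqueue one 64-byte CPT_INST_S through the LMT line */
	static void rvu_cpt_inst_enqueue(struct rvu *rvu, const void *inst,
					 u64 nq_doorbell)
	{
		void *lmt_line = (void *)rvu->rvu_cpt.lmt_addr;

		/* Stage the instruction in the 128-byte cacheable LMT line */
		memcpy(lmt_line, inst, RVU_CPT_INST_SIZE);
		dma_wmb();	/* order the line writes before the trigger */

		/* A single LMTST store to the LF's NQ submits the line */
		cn10k_lmt_flush(*(u64 *)lmt_line, nq_doorbell);
	}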

Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
---
Changes in V4:
- None

Changes in V3:
- None

Changes in V2:
- None

V1 Link: https://lore.kernel.org/netdev/20250502132005.611698-4-tanmay@marvell.com/
V2 Link: https://lore.kernel.org/netdev/20250618113020.130888-4-tanmay@marvell.com/
V3 Link: https://lore.kernel.org/netdev/20250711121317.340326-4-tanmay@marvell.com/

 .../net/ethernet/marvell/octeontx2/af/rvu.c   |  1 +
 .../net/ethernet/marvell/octeontx2/af/rvu.h   |  7 +++
 .../ethernet/marvell/octeontx2/af/rvu_cpt.c   | 51 +++++++++++++++++++
 .../ethernet/marvell/octeontx2/af/rvu_cpt.h   |  4 ++
 4 files changed, 63 insertions(+)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
index a4e9430acba9..250d9e34b91e 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
@@ -726,6 +726,7 @@ static void rvu_free_hw_resources(struct rvu *rvu)
 	rvu_npa_freemem(rvu);
 	rvu_npc_freemem(rvu);
 	rvu_nix_freemem(rvu);
+	rvu_cpt_freemem(rvu);
 
 	/* Free block LF bitmaps */
 	for (id = 0; id < BLK_COUNT; id++) {
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index 9f982c9f5953..1054a4ee19e0 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -602,6 +602,12 @@ struct rvu_cpt {
 	struct rvu_cpt_inst_queue cpt0_iq;
 	struct rvu_cpt_inst_queue cpt1_iq;
 	struct rvu_cpt_rx_inline_lf_cfg rx_cfg;
+
+	/* CPT LMTST */
+	void *lmt_base;
+	u64 lmt_addr;
+	size_t lmt_size;
+	dma_addr_t lmt_iova;
 };
 
 struct rvu {
@@ -1149,6 +1155,7 @@ int rvu_cpt_lf_teardown(struct rvu *rvu, u16 pcifunc, int blkaddr, int lf,
 			int slot);
 int rvu_cpt_ctx_flush(struct rvu *rvu, u16 pcifunc);
 int rvu_cpt_init(struct rvu *rvu);
+void rvu_cpt_freemem(struct rvu *rvu);
 
 #define NDC_AF_BANK_MASK       GENMASK_ULL(7, 0)
 #define NDC_AF_BANK_LINE_MASK  GENMASK_ULL(31, 16)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
index e1b170919ba9..84ca775b1871 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
@@ -1875,10 +1875,46 @@ int rvu_mbox_handler_cpt_rx_inline_lf_cfg(struct rvu *rvu,
 
 #define MAX_RXC_ICB_CNT  GENMASK_ULL(40, 32)
 
+static int rvu_cpt_lmt_init(struct rvu *rvu)
+{
+	struct lmtst_tbl_setup_req req;
+	dma_addr_t iova;
+	void *base;
+	int size;
+	int err;
+
+	if (is_rvu_otx2(rvu))
+		return 0;
+
+	memset(&req, 0, sizeof(struct lmtst_tbl_setup_req));
+
+	size = LMT_LINE_SIZE * LMT_BURST_SIZE + OTX2_ALIGN;
+	base = dma_alloc_attrs(rvu->dev, size, &iova, GFP_ATOMIC,
+			       DMA_ATTR_FORCE_CONTIGUOUS);
+	if (!base)
+		return -ENOMEM;
+
+	req.lmt_iova = ALIGN(iova, OTX2_ALIGN);
+	req.use_local_lmt_region = true;
+	err = rvu_mbox_handler_lmtst_tbl_setup(rvu, &req, NULL);
+	if (err) {
+		dma_free_attrs(rvu->dev, size, base, iova,
+			       DMA_ATTR_FORCE_CONTIGUOUS);
+		return err;
+	}
+
+	rvu->rvu_cpt.lmt_addr = (__force u64)PTR_ALIGN(base, OTX2_ALIGN);
+	rvu->rvu_cpt.lmt_base = base;
+	rvu->rvu_cpt.lmt_size = size;
+	rvu->rvu_cpt.lmt_iova = iova;
+	return 0;
+}
+
 int rvu_cpt_init(struct rvu *rvu)
 {
 	struct rvu_hwinfo *hw = rvu->hw;
 	u64 reg_val;
+	int ret;
 
 	/* Retrieve CPT PF number */
 	rvu->cpt_pf_num = get_cpt_pf_num(rvu);
@@ -1899,6 +1935,21 @@ int rvu_cpt_init(struct rvu *rvu)
 
 	spin_lock_init(&rvu->cpt_intr_lock);
 
+	ret = rvu_cpt_lmt_init(rvu);
+	if (ret)
+		return ret;
+
 	mutex_init(&rvu->rvu_cpt.lock);
 	return 0;
 }
+
+void rvu_cpt_freemem(struct rvu *rvu)
+{
+	if (is_rvu_otx2(rvu))
+		return;
+
+	if (rvu->rvu_cpt.lmt_base)
+		dma_free_attrs(rvu->dev, rvu->rvu_cpt.lmt_size,
+			       rvu->rvu_cpt.lmt_base, rvu->rvu_cpt.lmt_iova,
+			       DMA_ATTR_FORCE_CONTIGUOUS);
+}
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.h
index 4b57c7038d6c..e6fa247a03ba 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.h
@@ -49,6 +49,10 @@
 #define OTX2_CPT_INLINE_RX_OPCODE (0x26 | (1 << 6))
 #define CN10K_CPT_INLINE_RX_OPCODE (0x29 | (1 << 6))
 
+/* CPT LMTST */
+#define LMT_LINE_SIZE   128 /* LMT line size in bytes */
+#define LMT_BURST_SIZE  32  /* 32 LMTST lines for burst */
+
 /* Calculate CPT register offset */
 #define CPT_RVU_FUNC_ADDR_S(blk, slot, offs) \
 		(((blk) << 20) | ((slot) << 12) | (offs))
-- 
2.43.0



* [PATCH net-next v4 04/14] octeontx2-af: Handle inbound inline ipsec config in AF
  2025-08-19  2:14 [PATCH net-next v4 00/14] Enable Inbound IPsec offload on Marvell CN10K SoC Tanmay Jagdale
                   ` (2 preceding siblings ...)
  2025-08-19  2:14 ` [PATCH net-next v4 03/14] octeontx2-af: Setup Large Memory Transaction for crypto Tanmay Jagdale
@ 2025-08-19  2:14 ` Tanmay Jagdale
  2025-08-19  2:14 ` [PATCH net-next v4 05/14] octeontx2-af: Add support for CPT second pass Tanmay Jagdale
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 16+ messages in thread
From: Tanmay Jagdale @ 2025-08-19  2:14 UTC (permalink / raw)
  To: davem, leon, horms, sgoutham, bbhushan2
  Cc: linux-crypto, netdev, Tanmay Jagdale

From: Bharat Bhushan <bbhushan2@marvell.com>

CPT context flush can now be handled in the AF since a CPT LF can be
attached to it. With that, the AF driver can completely handle the
inbound inline IPsec configuration mailbox, so forward this mailbox to
the AF driver and remove all the related code from the CPT driver.
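
With this in place, a requester only needs to send the AF mailbox
message added earlier in this series. A hedged requester-side sketch
(cn10k_rx_inline_lf_cfg() is a hypothetical caller; error handling
and mbox locking are omitted):

	/* Sketch: ask the AF to set up the RX inline CPT LF */
	static int cn10k_rx_inline_lf_cfg(struct otx2_mbox *mbox)
	{
		struct cpt_rx_inline_lf_cfg_msg *req;

		req = (struct cpt_rx_inline_lf_cfg_msg *)
		      otx2_mbox_alloc_msg_rsp(mbox, 0, sizeof(*req),
					      sizeof(struct msg_rsp));
		if (!req)
			return -ENOMEM;

		req->hdr.id = MBOX_MSG_CPT_RX_INLINE_LF_CFG;
		req->hdr.sig = OTX2_MBOX_REQ_SIG;
		req->opcode = 0;	/* 0: AF picks the default RX opcode */
		req->credit = 0;	/* 0: AF defaults to the IQ depth */

		otx2_mbox_msg_send(mbox, 0);
		return 0;
	}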

Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
---
Changes in v4:
- None

Changes in V3:
- None

Changes in V2:
- RCT order definition
- Squashed patch 05/15 from v1 to avoid unused function warning

V1 Link: https://lore.kernel.org/netdev/20250502132005.611698-5-tanmay@marvell.com/
V2 Link: https://lore.kernel.org/netdev/20250618113020.130888-5-tanmay@marvell.com/
V3 Link: https://lore.kernel.org/netdev/20250711121317.340326-5-tanmay@marvell.com/

 .../marvell/octeontx2/otx2_cpt_common.h       |   1 -
 drivers/crypto/marvell/octeontx2/otx2_cptpf.h |  10 -
 .../marvell/octeontx2/otx2_cptpf_main.c       |  46 ---
 .../marvell/octeontx2/otx2_cptpf_mbox.c       | 281 +-----------------
 .../net/ethernet/marvell/octeontx2/af/mbox.c  |   3 -
 .../net/ethernet/marvell/octeontx2/af/mbox.h  |  11 -
 .../ethernet/marvell/octeontx2/af/rvu_cpt.c   |  71 ++---
 .../ethernet/marvell/octeontx2/af/rvu_reg.h   |   1 +
 8 files changed, 34 insertions(+), 390 deletions(-)

diff --git a/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h b/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h
index 89d4dfbb1e8e..f8e32e98eff8 100644
--- a/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h
+++ b/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h
@@ -32,7 +32,6 @@
 #define BAD_OTX2_CPT_ENG_TYPE OTX2_CPT_MAX_ENG_TYPES
 
 /* Take mbox id from end of CPT mbox range in AF (range 0xA00 - 0xBFF) */
-#define MBOX_MSG_RX_INLINE_IPSEC_LF_CFG 0xBFE
 #define MBOX_MSG_GET_ENG_GRP_NUM        0xBFF
 #define MBOX_MSG_GET_CAPS               0xBFD
 #define MBOX_MSG_GET_KVF_LIMITS         0xBFC
diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf.h b/drivers/crypto/marvell/octeontx2/otx2_cptpf.h
index e5859a1e1c60..b7d1298e2b85 100644
--- a/drivers/crypto/marvell/octeontx2/otx2_cptpf.h
+++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf.h
@@ -41,9 +41,6 @@ struct otx2_cptpf_dev {
 	struct work_struct	afpf_mbox_work;
 	struct workqueue_struct *afpf_mbox_wq;
 
-	struct otx2_mbox	afpf_mbox_up;
-	struct work_struct	afpf_mbox_up_work;
-
 	/* VF <=> PF mbox */
 	struct otx2_mbox	vfpf_mbox;
 	struct workqueue_struct *vfpf_mbox_wq;
@@ -56,10 +53,8 @@ struct otx2_cptpf_dev {
 	u8 pf_id;               /* RVU PF number */
 	u8 max_vfs;		/* Maximum number of VFs supported by CPT */
 	u8 enabled_vfs;		/* Number of enabled VFs */
-	u8 sso_pf_func_ovrd;	/* SSO PF_FUNC override bit */
 	u8 kvf_limits;		/* Kernel crypto limits */
 	bool has_cpt1;
-	u8 rsrc_req_blkaddr;
 
 	/* Devlink */
 	struct devlink *dl;
@@ -67,12 +62,7 @@ struct otx2_cptpf_dev {
 
 irqreturn_t otx2_cptpf_afpf_mbox_intr(int irq, void *arg);
 void otx2_cptpf_afpf_mbox_handler(struct work_struct *work);
-void otx2_cptpf_afpf_mbox_up_handler(struct work_struct *work);
 irqreturn_t otx2_cptpf_vfpf_mbox_intr(int irq, void *arg);
 void otx2_cptpf_vfpf_mbox_handler(struct work_struct *work);
 
-int otx2_inline_cptlf_setup(struct otx2_cptpf_dev *cptpf,
-			    struct otx2_cptlfs_info *lfs, u8 egrp, int num_lfs);
-void otx2_inline_cptlf_cleanup(struct otx2_cptlfs_info *lfs);
-
 #endif /* __OTX2_CPTPF_H */
diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c
index 1bceabe5f0e2..4791cd460eaa 100644
--- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c
+++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c
@@ -13,7 +13,6 @@
 #define OTX2_CPT_DRV_NAME    "rvu_cptpf"
 #define OTX2_CPT_DRV_STRING  "Marvell RVU CPT Physical Function Driver"
 
-#define CPT_UC_RID_CN9K_B0   1
 #define CPT_UC_RID_CN10K_A   4
 #define CPT_UC_RID_CN10K_B   5
 
@@ -477,19 +476,10 @@ static int cptpf_afpf_mbox_init(struct otx2_cptpf_dev *cptpf)
 	if (err)
 		goto error;
 
-	err = otx2_mbox_init(&cptpf->afpf_mbox_up, cptpf->afpf_mbox_base,
-			     pdev, cptpf->reg_base, MBOX_DIR_PFAF_UP, 1);
-	if (err)
-		goto mbox_cleanup;
-
 	INIT_WORK(&cptpf->afpf_mbox_work, otx2_cptpf_afpf_mbox_handler);
-	INIT_WORK(&cptpf->afpf_mbox_up_work, otx2_cptpf_afpf_mbox_up_handler);
 	mutex_init(&cptpf->lock);
-
 	return 0;
 
-mbox_cleanup:
-	otx2_mbox_destroy(&cptpf->afpf_mbox);
 error:
 	destroy_workqueue(cptpf->afpf_mbox_wq);
 	return err;
@@ -499,33 +489,6 @@ static void cptpf_afpf_mbox_destroy(struct otx2_cptpf_dev *cptpf)
 {
 	destroy_workqueue(cptpf->afpf_mbox_wq);
 	otx2_mbox_destroy(&cptpf->afpf_mbox);
-	otx2_mbox_destroy(&cptpf->afpf_mbox_up);
-}
-
-static ssize_t sso_pf_func_ovrd_show(struct device *dev,
-				     struct device_attribute *attr, char *buf)
-{
-	struct otx2_cptpf_dev *cptpf = dev_get_drvdata(dev);
-
-	return sprintf(buf, "%d\n", cptpf->sso_pf_func_ovrd);
-}
-
-static ssize_t sso_pf_func_ovrd_store(struct device *dev,
-				      struct device_attribute *attr,
-				      const char *buf, size_t count)
-{
-	struct otx2_cptpf_dev *cptpf = dev_get_drvdata(dev);
-	u8 sso_pf_func_ovrd;
-
-	if (!(cptpf->pdev->revision == CPT_UC_RID_CN9K_B0))
-		return count;
-
-	if (kstrtou8(buf, 0, &sso_pf_func_ovrd))
-		return -EINVAL;
-
-	cptpf->sso_pf_func_ovrd = sso_pf_func_ovrd;
-
-	return count;
 }
 
 static ssize_t kvf_limits_show(struct device *dev,
@@ -558,11 +521,9 @@ static ssize_t kvf_limits_store(struct device *dev,
 }
 
 static DEVICE_ATTR_RW(kvf_limits);
-static DEVICE_ATTR_RW(sso_pf_func_ovrd);
 
 static struct attribute *cptpf_attrs[] = {
 	&dev_attr_kvf_limits.attr,
-	&dev_attr_sso_pf_func_ovrd.attr,
 	NULL
 };
 
@@ -841,13 +802,6 @@ static void otx2_cptpf_remove(struct pci_dev *pdev)
 	cptpf_sriov_disable(pdev);
 	otx2_cpt_unregister_dl(cptpf);
 
-	/* Cleanup Inline CPT LF's if attached */
-	if (cptpf->lfs.lfs_num)
-		otx2_inline_cptlf_cleanup(&cptpf->lfs);
-
-	if (cptpf->cpt1_lfs.lfs_num)
-		otx2_inline_cptlf_cleanup(&cptpf->cpt1_lfs);
-
 	/* Delete sysfs entry created for kernel VF limits */
 	sysfs_remove_group(&pdev->dev.kobj, &cptpf_sysfs_group);
 	/* Cleanup engine groups */
diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c
index 3ff3a49bd82b..326f5c802242 100644
--- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c
+++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c
@@ -5,20 +5,6 @@
 #include "otx2_cptpf.h"
 #include "rvu_reg.h"
 
-/* Fastpath ipsec opcode with inplace processing */
-#define CPT_INLINE_RX_OPCODE (0x26 | (1 << 6))
-#define CN10K_CPT_INLINE_RX_OPCODE (0x29 | (1 << 6))
-
-#define cpt_inline_rx_opcode(pdev)                      \
-({                                                      \
-	u8 opcode;                                      \
-	if (is_dev_otx2(pdev))                          \
-		opcode = CPT_INLINE_RX_OPCODE;          \
-	else                                            \
-		opcode = CN10K_CPT_INLINE_RX_OPCODE;    \
-	(opcode);                                       \
-})
-
 /*
  * CPT PF driver version, It will be incremented by 1 for every feature
  * addition in CPT mailbox messages.
@@ -126,182 +112,6 @@ static int handle_msg_kvf_limits(struct otx2_cptpf_dev *cptpf,
 	return 0;
 }
 
-static int send_inline_ipsec_inbound_msg(struct otx2_cptpf_dev *cptpf,
-					 int sso_pf_func, u8 slot)
-{
-	struct cpt_inline_ipsec_cfg_msg *req;
-	struct pci_dev *pdev = cptpf->pdev;
-
-	req = (struct cpt_inline_ipsec_cfg_msg *)
-	      otx2_mbox_alloc_msg_rsp(&cptpf->afpf_mbox, 0,
-				      sizeof(*req), sizeof(struct msg_rsp));
-	if (req == NULL) {
-		dev_err(&pdev->dev, "RVU MBOX failed to get message.\n");
-		return -EFAULT;
-	}
-	memset(req, 0, sizeof(*req));
-	req->hdr.id = MBOX_MSG_CPT_INLINE_IPSEC_CFG;
-	req->hdr.sig = OTX2_MBOX_REQ_SIG;
-	req->hdr.pcifunc = OTX2_CPT_RVU_PFFUNC(cptpf->pdev, cptpf->pf_id, 0);
-	req->dir = CPT_INLINE_INBOUND;
-	req->slot = slot;
-	req->sso_pf_func_ovrd = cptpf->sso_pf_func_ovrd;
-	req->sso_pf_func = sso_pf_func;
-	req->enable = 1;
-
-	return otx2_cpt_send_mbox_msg(&cptpf->afpf_mbox, pdev);
-}
-
-static int rx_inline_ipsec_lf_cfg(struct otx2_cptpf_dev *cptpf, u8 egrp,
-				  struct otx2_cpt_rx_inline_lf_cfg *req)
-{
-	struct nix_inline_ipsec_cfg *nix_req;
-	struct pci_dev *pdev = cptpf->pdev;
-	int ret;
-
-	nix_req = (struct nix_inline_ipsec_cfg *)
-		   otx2_mbox_alloc_msg_rsp(&cptpf->afpf_mbox, 0,
-					   sizeof(*nix_req),
-					   sizeof(struct msg_rsp));
-	if (nix_req == NULL) {
-		dev_err(&pdev->dev, "RVU MBOX failed to get message.\n");
-		return -EFAULT;
-	}
-	memset(nix_req, 0, sizeof(*nix_req));
-	nix_req->hdr.id = MBOX_MSG_NIX_INLINE_IPSEC_CFG;
-	nix_req->hdr.sig = OTX2_MBOX_REQ_SIG;
-	nix_req->enable = 1;
-	nix_req->credit_th = req->credit_th;
-	nix_req->bpid = req->bpid;
-	if (!req->credit || req->credit > OTX2_CPT_INST_QLEN_MSGS)
-		nix_req->cpt_credit = OTX2_CPT_INST_QLEN_MSGS - 1;
-	else
-		nix_req->cpt_credit = req->credit - 1;
-	nix_req->gen_cfg.egrp = egrp;
-	if (req->opcode)
-		nix_req->gen_cfg.opcode = req->opcode;
-	else
-		nix_req->gen_cfg.opcode = cpt_inline_rx_opcode(pdev);
-	nix_req->gen_cfg.param1 = req->param1;
-	nix_req->gen_cfg.param2 = req->param2;
-	nix_req->inst_qsel.cpt_pf_func =
-		OTX2_CPT_RVU_PFFUNC(cptpf->pdev, cptpf->pf_id, 0);
-	nix_req->inst_qsel.cpt_slot = 0;
-	ret = otx2_cpt_send_mbox_msg(&cptpf->afpf_mbox, pdev);
-	if (ret)
-		return ret;
-
-	if (cptpf->has_cpt1) {
-		ret = send_inline_ipsec_inbound_msg(cptpf, req->sso_pf_func, 1);
-		if (ret)
-			return ret;
-	}
-
-	return send_inline_ipsec_inbound_msg(cptpf, req->sso_pf_func, 0);
-}
-
-int
-otx2_inline_cptlf_setup(struct otx2_cptpf_dev *cptpf,
-			struct otx2_cptlfs_info *lfs, u8 egrp, int num_lfs)
-{
-	int ret;
-
-	ret = otx2_cptlf_init(lfs, 1 << egrp, OTX2_CPT_QUEUE_HI_PRIO, 1);
-	if (ret) {
-		dev_err(&cptpf->pdev->dev,
-			"LF configuration failed for RX inline ipsec.\n");
-		return ret;
-	}
-
-	/* Get msix offsets for attached LFs */
-	ret = otx2_cpt_msix_offset_msg(lfs);
-	if (ret)
-		goto cleanup_lf;
-
-	/* Register for CPT LF Misc interrupts */
-	ret = otx2_cptlf_register_misc_interrupts(lfs);
-	if (ret)
-		goto free_irq;
-
-	return 0;
-free_irq:
-	otx2_cptlf_unregister_misc_interrupts(lfs);
-cleanup_lf:
-	otx2_cptlf_shutdown(lfs);
-	return ret;
-}
-
-void
-otx2_inline_cptlf_cleanup(struct otx2_cptlfs_info *lfs)
-{
-	/* Unregister misc interrupt */
-	otx2_cptlf_unregister_misc_interrupts(lfs);
-
-	/* Cleanup LFs */
-	otx2_cptlf_shutdown(lfs);
-}
-
-static int handle_msg_rx_inline_ipsec_lf_cfg(struct otx2_cptpf_dev *cptpf,
-					     struct mbox_msghdr *req)
-{
-	struct otx2_cpt_rx_inline_lf_cfg *cfg_req;
-	int num_lfs = 1, ret;
-	u8 egrp;
-
-	cfg_req = (struct otx2_cpt_rx_inline_lf_cfg *)req;
-	if (cptpf->lfs.lfs_num) {
-		dev_err(&cptpf->pdev->dev,
-			"LF is already configured for RX inline ipsec.\n");
-		return -EEXIST;
-	}
-	/*
-	 * Allow LFs to execute requests destined to only grp IE_TYPES and
-	 * set queue priority of each LF to high
-	 */
-	egrp = otx2_cpt_get_eng_grp(&cptpf->eng_grps, OTX2_CPT_IE_TYPES);
-	if (egrp == OTX2_CPT_INVALID_CRYPTO_ENG_GRP) {
-		dev_err(&cptpf->pdev->dev,
-			"Engine group for inline ipsec is not available\n");
-		return -ENOENT;
-	}
-
-	cptpf->lfs.global_slot = 0;
-	cptpf->lfs.ctx_ilen_ovrd = cfg_req->ctx_ilen_valid;
-	cptpf->lfs.ctx_ilen = cfg_req->ctx_ilen;
-
-	ret = otx2_inline_cptlf_setup(cptpf, &cptpf->lfs, egrp, num_lfs);
-	if (ret) {
-		dev_err(&cptpf->pdev->dev, "Inline-Ipsec CPT0 LF setup failed.\n");
-		return ret;
-	}
-
-	if (cptpf->has_cpt1) {
-		cptpf->rsrc_req_blkaddr = BLKADDR_CPT1;
-		cptpf->cpt1_lfs.global_slot = num_lfs;
-		cptpf->cpt1_lfs.ctx_ilen_ovrd = cfg_req->ctx_ilen_valid;
-		cptpf->cpt1_lfs.ctx_ilen = cfg_req->ctx_ilen;
-		ret = otx2_inline_cptlf_setup(cptpf, &cptpf->cpt1_lfs, egrp,
-					      num_lfs);
-		if (ret) {
-			dev_err(&cptpf->pdev->dev, "Inline CPT1 LF setup failed.\n");
-			goto lf_cleanup;
-		}
-		cptpf->rsrc_req_blkaddr = 0;
-	}
-
-	ret = rx_inline_ipsec_lf_cfg(cptpf, egrp, cfg_req);
-	if (ret)
-		goto lf1_cleanup;
-
-	return 0;
-
-lf1_cleanup:
-	otx2_inline_cptlf_cleanup(&cptpf->cpt1_lfs);
-lf_cleanup:
-	otx2_inline_cptlf_cleanup(&cptpf->lfs);
-	return ret;
-}
-
 static int cptpf_handle_vf_req(struct otx2_cptpf_dev *cptpf,
 			       struct otx2_cptvf_info *vf,
 			       struct mbox_msghdr *req, int size)
@@ -322,9 +132,6 @@ static int cptpf_handle_vf_req(struct otx2_cptpf_dev *cptpf,
 	case MBOX_MSG_GET_KVF_LIMITS:
 		err = handle_msg_kvf_limits(cptpf, vf, req);
 		break;
-	case MBOX_MSG_RX_INLINE_IPSEC_LF_CFG:
-		err = handle_msg_rx_inline_ipsec_lf_cfg(cptpf, req);
-		break;
 
 	default:
 		err = forward_to_af(cptpf, vf, req, size);
@@ -417,28 +224,14 @@ void otx2_cptpf_vfpf_mbox_handler(struct work_struct *work)
 irqreturn_t otx2_cptpf_afpf_mbox_intr(int __always_unused irq, void *arg)
 {
 	struct otx2_cptpf_dev *cptpf = arg;
-	struct otx2_mbox_dev *mdev;
-	struct otx2_mbox *mbox;
-	struct mbox_hdr *hdr;
 	u64 intr;
 
 	/* Read the interrupt bits */
 	intr = otx2_cpt_read64(cptpf->reg_base, BLKADDR_RVUM, 0, RVU_PF_INT);
 
 	if (intr & 0x1ULL) {
-		mbox = &cptpf->afpf_mbox;
-		mdev = &mbox->dev[0];
-		hdr = mdev->mbase + mbox->rx_start;
-		if (hdr->num_msgs)
-			/* Schedule work queue function to process the MBOX request */
-			queue_work(cptpf->afpf_mbox_wq, &cptpf->afpf_mbox_work);
-
-		mbox = &cptpf->afpf_mbox_up;
-		mdev = &mbox->dev[0];
-		hdr = mdev->mbase + mbox->rx_start;
-		if (hdr->num_msgs)
-			/* Schedule work queue function to process the MBOX request */
-			queue_work(cptpf->afpf_mbox_wq, &cptpf->afpf_mbox_up_work);
+		/* Schedule work queue function to process the MBOX request */
+		queue_work(cptpf->afpf_mbox_wq, &cptpf->afpf_mbox_work);
 		/* Clear and ack the interrupt */
 		otx2_cpt_write64(cptpf->reg_base, BLKADDR_RVUM, 0, RVU_PF_INT,
 				 0x1ULL);
@@ -464,8 +257,6 @@ static void process_afpf_mbox_msg(struct otx2_cptpf_dev *cptpf,
 			msg->sig, msg->id);
 		return;
 	}
-	if (cptpf->rsrc_req_blkaddr == BLKADDR_CPT1)
-		lfs = &cptpf->cpt1_lfs;
 
 	switch (msg->id) {
 	case MBOX_MSG_READY:
@@ -592,71 +383,3 @@ void otx2_cptpf_afpf_mbox_handler(struct work_struct *work)
 	}
 	otx2_mbox_reset(afpf_mbox, 0);
 }
-
-static void handle_msg_cpt_inst_lmtst(struct otx2_cptpf_dev *cptpf,
-				      struct mbox_msghdr *msg)
-{
-	struct cpt_inst_lmtst_req *req = (struct cpt_inst_lmtst_req *)msg;
-	struct otx2_cptlfs_info *lfs = &cptpf->lfs;
-	struct msg_rsp *rsp;
-
-	if (cptpf->lfs.lfs_num)
-		lfs->ops->send_cmd((union otx2_cpt_inst_s *)req->inst, 1,
-				   &lfs->lf[0]);
-
-	rsp = (struct msg_rsp *)otx2_mbox_alloc_msg(&cptpf->afpf_mbox_up, 0,
-						    sizeof(*rsp));
-	if (!rsp)
-		return;
-
-	rsp->hdr.id = msg->id;
-	rsp->hdr.sig = OTX2_MBOX_RSP_SIG;
-	rsp->hdr.pcifunc = 0;
-	rsp->hdr.rc = 0;
-}
-
-static void process_afpf_mbox_up_msg(struct otx2_cptpf_dev *cptpf,
-				     struct mbox_msghdr *msg)
-{
-	if (msg->id >= MBOX_MSG_MAX) {
-		dev_err(&cptpf->pdev->dev,
-			"MBOX msg with unknown ID %d\n", msg->id);
-		return;
-	}
-
-	switch (msg->id) {
-	case MBOX_MSG_CPT_INST_LMTST:
-		handle_msg_cpt_inst_lmtst(cptpf, msg);
-		break;
-	default:
-		otx2_reply_invalid_msg(&cptpf->afpf_mbox_up, 0, 0, msg->id);
-	}
-}
-
-void otx2_cptpf_afpf_mbox_up_handler(struct work_struct *work)
-{
-	struct otx2_cptpf_dev *cptpf;
-	struct otx2_mbox_dev *mdev;
-	struct mbox_hdr *rsp_hdr;
-	struct mbox_msghdr *msg;
-	struct otx2_mbox *mbox;
-	int offset, i;
-
-	cptpf = container_of(work, struct otx2_cptpf_dev, afpf_mbox_up_work);
-	mbox = &cptpf->afpf_mbox_up;
-	mdev = &mbox->dev[0];
-	/* Sync mbox data into memory */
-	smp_wmb();
-
-	rsp_hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
-	offset = mbox->rx_start + ALIGN(sizeof(*rsp_hdr), MBOX_MSG_ALIGN);
-
-	for (i = 0; i < rsp_hdr->num_msgs; i++) {
-		msg = (struct mbox_msghdr *)(mdev->mbase + offset);
-
-		process_afpf_mbox_up_msg(cptpf, msg);
-
-		offset = mbox->rx_start + msg->next_msgoff;
-	}
-	otx2_mbox_msg_send(mbox, 0);
-}
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.c b/drivers/net/ethernet/marvell/octeontx2/af/mbox.c
index 75872d257eca..861025bb93c6 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.c
@@ -555,9 +555,6 @@ const char *otx2_mbox_id2name(u16 id)
 	MBOX_UP_CGX_MESSAGES
 #undef M
 
-#define M(_name, _id, _1, _2, _3) case _id: return # _name;
-	MBOX_UP_CPT_MESSAGES
-#undef M
 	default:
 		return "INVALID ID";
 	}
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index a2936a287b15..6e8e36548344 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -395,9 +395,6 @@ M(MCS_CUSTOM_TAG_CFG_GET, 0xa021, mcs_custom_tag_cfg_get,			\
 #define MBOX_UP_CGX_MESSAGES						\
 M(CGX_LINK_EVENT,	0xC00, cgx_link_event, cgx_link_info_msg, msg_rsp)
 
-#define MBOX_UP_CPT_MESSAGES						\
-M(CPT_INST_LMTST,	0xD00, cpt_inst_lmtst, cpt_inst_lmtst_req, msg_rsp)
-
 #define MBOX_UP_MCS_MESSAGES						\
 M(MCS_INTR_NOTIFY,	0xE00, mcs_intr_notify, mcs_intr_info, msg_rsp)
 
@@ -408,7 +405,6 @@ enum {
 #define M(_name, _id, _1, _2, _3) MBOX_MSG_ ## _name = _id,
 MBOX_MESSAGES
 MBOX_UP_CGX_MESSAGES
-MBOX_UP_CPT_MESSAGES
 MBOX_UP_MCS_MESSAGES
 MBOX_UP_REP_MESSAGES
 #undef M
@@ -1933,13 +1929,6 @@ struct cpt_rxc_time_cfg_req {
 	u16 active_limit;
 };
 
-/* Mailbox message request format to request for CPT_INST_S lmtst. */
-struct cpt_inst_lmtst_req {
-	struct mbox_msghdr hdr;
-	u64 inst[8];
-	u64 rsvd;
-};
-
 /* Mailbox message format to request for CPT LF reset */
 struct cpt_lf_rst_req {
 	struct mbox_msghdr hdr;
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
index 84ca775b1871..edc3c356dba3 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
@@ -12,6 +12,7 @@
 #include "mbox.h"
 #include "rvu.h"
 #include "rvu_cpt.h"
+#include <linux/soc/marvell/octeontx2/asm.h>
 
 /* CPT PF device id */
 #define	PCI_DEVID_OTX2_CPT_PF	0xA0FD
@@ -26,6 +27,10 @@
 /* Default CPT_AF_RXC_CFG1:max_rxc_icb_cnt */
 #define CPT_DFLT_MAX_RXC_ICB_CNT  0xC0ULL
 
+/* CPT LMTST */
+#define LMT_LINE_SIZE   128 /* LMT line size in bytes */
+#define LMT_BURST_SIZE  32  /* 32 LMTST lines for burst */
+
 #define cpt_get_eng_sts(e_min, e_max, rsp, etype)                   \
 ({                                                                  \
 	u64 free_sts = 0, busy_sts = 0;                             \
@@ -699,10 +704,6 @@ int rvu_mbox_handler_cpt_inline_ipsec_cfg(struct rvu *rvu,
 		return CPT_AF_ERR_LF_INVALID;
 
 	switch (req->dir) {
-	case CPT_INLINE_INBOUND:
-		ret = cpt_inline_ipsec_cfg_inbound(rvu, blkaddr, cptlf, req);
-		break;
-
 	case CPT_INLINE_OUTBOUND:
 		ret = cpt_inline_ipsec_cfg_outbound(rvu, blkaddr, cptlf, req);
 		break;
@@ -1253,20 +1254,36 @@ int rvu_cpt_lf_teardown(struct rvu *rvu, u16 pcifunc, int blkaddr, int lf, int s
 	return 0;
 }
 
+static void cn10k_cpt_inst_flush(struct rvu *rvu, u64 *inst, u64 size)
+{
+	u64 blkaddr = BLKADDR_CPT0;
+	u64 val = 0, tar_addr = 0;
+	void __iomem *io_addr;
+
+	io_addr	= rvu->pfreg_base + CPT_RVU_FUNC_ADDR_S(blkaddr, 0, CPT_LF_NQX);
+
+	/* Target address for LMTST flush tells HW how many 128bit
+	 * words are present.
+	 * tar_addr[6:4] size of first LMTST - 1 in units of 128b.
+	 */
+	tar_addr |= (__force u64)io_addr | (((size / 16) - 1) & 0x7) << 4;
+	dma_wmb();
+	memcpy((u64 *)rvu->rvu_cpt.lmt_addr, inst, size);
+	cn10k_lmt_flush(val, tar_addr);
+	dma_wmb();
+}
+
 #define CPT_RES_LEN    16
 #define CPT_SE_IE_EGRP 1ULL
 
 static int cpt_inline_inb_lf_cmd_send(struct rvu *rvu, int blkaddr,
 				      int nix_blkaddr)
 {
-	int cpt_pf_num = rvu->cpt_pf_num;
-	struct cpt_inst_lmtst_req *req;
 	dma_addr_t res_daddr;
 	int timeout = 3000;
+	u64 inst[8];
 	u8 cpt_idx;
-	u64 *inst;
 	u16 *res;
-	int rc;
 
 	res = kzalloc(CPT_RES_LEN, GFP_KERNEL);
 	if (!res)
@@ -1276,24 +1293,11 @@ static int cpt_inline_inb_lf_cmd_send(struct rvu *rvu, int blkaddr,
 				   DMA_BIDIRECTIONAL);
 	if (dma_mapping_error(rvu->dev, res_daddr)) {
 		dev_err(rvu->dev, "DMA mapping failed for CPT result\n");
-		rc = -EFAULT;
-		goto res_free;
+		kfree(res);
+		return -EFAULT;
 	}
 	*res = 0xFFFF;
 
-	/* Send mbox message to CPT PF */
-	req = (struct cpt_inst_lmtst_req *)
-	       otx2_mbox_alloc_msg_rsp(&rvu->afpf_wq_info.mbox_up,
-				       cpt_pf_num, sizeof(*req),
-				       sizeof(struct msg_rsp));
-	if (!req) {
-		rc = -ENOMEM;
-		goto res_daddr_unmap;
-	}
-	req->hdr.sig = OTX2_MBOX_REQ_SIG;
-	req->hdr.id = MBOX_MSG_CPT_INST_LMTST;
-
-	inst = req->inst;
 	/* Prepare CPT_INST_S */
 	inst[0] = 0;
 	inst[1] = res_daddr;
@@ -1314,11 +1318,8 @@ static int cpt_inline_inb_lf_cmd_send(struct rvu *rvu, int blkaddr,
 	rvu_write64(rvu, nix_blkaddr, NIX_AF_RX_CPTX_CREDIT(cpt_idx),
 		    BIT_ULL(22) - 1);
 
-	otx2_mbox_msg_send(&rvu->afpf_wq_info.mbox_up, cpt_pf_num);
-	rc = otx2_mbox_wait_for_rsp(&rvu->afpf_wq_info.mbox_up, cpt_pf_num);
-	if (rc)
-		dev_warn(rvu->dev, "notification to pf %d failed\n",
-			 cpt_pf_num);
+	cn10k_cpt_inst_flush(rvu, inst, 64);
+
 	/* Wait for CPT instruction to be completed */
 	do {
 		mdelay(1);
@@ -1331,11 +1332,8 @@ static int cpt_inline_inb_lf_cmd_send(struct rvu *rvu, int blkaddr,
 	if (timeout == 0)
 		dev_warn(rvu->dev, "Poll for result hits hard loop counter\n");
 
-res_daddr_unmap:
 	dma_unmap_single(rvu->dev, res_daddr, CPT_RES_LEN, DMA_BIDIRECTIONAL);
-res_free:
 	kfree(res);
-
 	return 0;
 }
 
@@ -1381,23 +1379,16 @@ int rvu_cpt_ctx_flush(struct rvu *rvu, u16 pcifunc)
 		goto unlock;
 	}
 
-	/* Enable BAR2 ALIAS for this pcifunc. */
-	reg = BIT_ULL(16) | pcifunc;
-	rvu_bar2_sel_write64(rvu, blkaddr, CPT_AF_BAR2_SEL, reg);
-
 	for (i = 0; i < max_ctx_entries; i++) {
 		cam_data = rvu_read64(rvu, blkaddr, CPT_AF_CTX_CAM_DATA(i));
 
 		if ((FIELD_GET(CTX_CAM_PF_FUNC, cam_data) == pcifunc) &&
 		    FIELD_GET(CTX_CAM_CPTR, cam_data)) {
 			reg = BIT_ULL(46) | FIELD_GET(CTX_CAM_CPTR, cam_data);
-			rvu_write64(rvu, blkaddr,
-				    CPT_AF_BAR2_ALIASX(slot, CPT_LF_CTX_FLUSH),
-				    reg);
+			otx2_cpt_write64(rvu->pfreg_base, blkaddr, slot,
+					 CPT_LF_CTX_FLUSH, reg);
 		}
 	}
-	rvu_bar2_sel_write64(rvu, blkaddr, CPT_AF_BAR2_SEL, 0);
-
 unlock:
 	mutex_unlock(&rvu->rsrc_lock);
 
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
index d92c154b08cf..b24d9e7c8df4 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
@@ -581,6 +581,7 @@
 #define CPT_LF_Q_SIZE                   0x100
 #define CPT_LF_Q_INST_PTR               0x110
 #define CPT_LF_Q_GRP_PTR                0x120
+#define CPT_LF_NQX                      0x400
 #define CPT_LF_CTX_FLUSH                0x510
 
 #define NPC_AF_BLK_RST                  (0x00040)
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH net-next v4 05/14] octeontx2-af: Add support for CPT second pass
  2025-08-19  2:14 [PATCH net-next v4 00/14] Enable Inbound IPsec offload on Marvell CN10K SoC Tanmay Jagdale
                   ` (3 preceding siblings ...)
  2025-08-19  2:14 ` [PATCH net-next v4 04/14] octeontx2-af: Handle inbound inline ipsec config in AF Tanmay Jagdale
@ 2025-08-19  2:14 ` Tanmay Jagdale
  2025-08-19  2:14 ` [PATCH net-next v4 06/14] octeontx2-af: Add support for SPI to SA index translation Tanmay Jagdale
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 16+ messages in thread
From: Tanmay Jagdale @ 2025-08-19  2:14 UTC (permalink / raw)
  To: davem, leon, horms, sgoutham, bbhushan2
  Cc: linux-crypto, netdev, Rakesh Kudurumalla, Tanmay Jagdale

From: Rakesh Kudurumalla <rkudurumalla@marvell.com>

Implement a mailbox to allocate an RQ mask and apply it to a
NIX LF so that RQ context fields can be toggled for CPT
second-pass packets.
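
A minimal sketch of how a NIX LF driver could use the new mailbox.
The request struct and its fields are from this patch; the
otx2_mbox_alloc_msg_nix_lf_inline_rq_cfg() helper is assumed to be
generated by the usual M() macro, and the toggled field is only an
example:

    struct nix_rq_cpt_field_mask_cfg_req *req;

    req = otx2_mbox_alloc_msg_nix_lf_inline_rq_cfg(&pfvf->mbox);
    if (!req)
            return -ENOMEM;

    /* Program the second-pass value in rq_set and flag the
     * affected field in rq_mask (example: inner L4 csum disable).
     */
    req->rq_set.csum_il4_dis = 1;
    req->rq_mask.csum_il4_dis = 1;
    req->ipsec_cfg1.rq_mask_enable = 1;

    return otx2_sync_mbox_msg(&pfvf->mbox);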

Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>
Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
---
Changes in V4:
- None

Changes in V3:
- Fixed uninitialized rq_mask variable

Changes in V2:
- None

V1 Link: https://lore.kernel.org/netdev/20250502132005.611698-6-tanmay@marvell.com/
V2 Link: https://lore.kernel.org/netdev/20250618113020.130888-6-tanmay@marvell.com/
V3 Link: https://lore.kernel.org/netdev/20250711121317.340326-6-tanmay@marvell.com/

 .../net/ethernet/marvell/octeontx2/af/mbox.h  |  23 ++++
 .../net/ethernet/marvell/octeontx2/af/rvu.h   |   7 +
 .../ethernet/marvell/octeontx2/af/rvu_cn10k.c |  11 ++
 .../ethernet/marvell/octeontx2/af/rvu_nix.c   | 125 ++++++++++++++++++
 .../ethernet/marvell/octeontx2/af/rvu_reg.h   |  15 +++
 .../marvell/octeontx2/af/rvu_struct.h         |   4 +-
 6 files changed, 184 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index 6e8e36548344..92a000770668 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -332,6 +332,9 @@ M(NIX_CPT_BP_DISABLE,   0x8021, nix_cpt_bp_disable, nix_bp_cfg_req,	    \
 				msg_rsp)				\
 M(NIX_READ_INLINE_IPSEC_CFG, 0x8023, nix_read_inline_ipsec_cfg,		\
 				msg_req, nix_inline_ipsec_cfg)		\
+M(NIX_LF_INLINE_RQ_CFG, 0x8024, nix_lf_inline_rq_cfg,		\
+				nix_rq_cpt_field_mask_cfg_req,  \
+				msg_rsp)	\
 M(NIX_MCAST_GRP_CREATE,	0x802b, nix_mcast_grp_create, nix_mcast_grp_create_req,	\
 				nix_mcast_grp_create_rsp)			\
 M(NIX_MCAST_GRP_DESTROY, 0x802c, nix_mcast_grp_destroy, nix_mcast_grp_destroy_req,	\
@@ -875,6 +878,7 @@ enum nix_af_status {
 	NIX_AF_ERR_CQ_CTX_WRITE_ERR  = -429,
 	NIX_AF_ERR_AQ_CTX_RETRY_WRITE  = -430,
 	NIX_AF_ERR_LINK_CREDITS  = -431,
+	NIX_AF_ERR_RQ_CPT_MASK  = -432,
 	NIX_AF_ERR_INVALID_BPID         = -434,
 	NIX_AF_ERR_INVALID_BPID_REQ     = -435,
 	NIX_AF_ERR_INVALID_MCAST_GRP	= -436,
@@ -1196,6 +1200,25 @@ struct nix_mark_format_cfg_rsp {
 	u8 mark_format_idx;
 };
 
+struct nix_rq_cpt_field_mask_cfg_req {
+	struct mbox_msghdr hdr;
+#define RQ_CTX_MASK_MAX 6
+	union {
+		u64 rq_ctx_word_set[RQ_CTX_MASK_MAX];
+		struct nix_cn10k_rq_ctx_s rq_set;
+	};
+	union {
+		u64 rq_ctx_word_mask[RQ_CTX_MASK_MAX];
+		struct nix_cn10k_rq_ctx_s rq_mask;
+	};
+	struct nix_lf_rx_ipec_cfg1_req {
+		u32 spb_cpt_aura;
+		u8 rq_mask_enable;
+		u8 spb_cpt_sizem1;
+		u8 spb_cpt_enable;
+	} ipsec_cfg1;
+};
+
 struct nix_rx_mode {
 	struct mbox_msghdr hdr;
 #define NIX_RX_MODE_UCAST	BIT(0)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index 1054a4ee19e0..39385c4fbb4b 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -378,6 +378,11 @@ struct nix_lso {
 	u8 in_use;
 };
 
+struct nix_rq_cpt_mask {
+	u8 total;
+	u8 in_use;
+};
+
 struct nix_txvlan {
 #define NIX_TX_VTAG_DEF_MAX 0x400
 	struct rsrc_bmap rsrc;
@@ -401,6 +406,7 @@ struct nix_hw {
 	struct nix_flowkey flowkey;
 	struct nix_mark_format mark_format;
 	struct nix_lso lso;
+	struct nix_rq_cpt_mask rq_msk;
 	struct nix_txvlan txvlan;
 	struct nix_ipolicer *ipolicer;
 	struct nix_bp bp;
@@ -426,6 +432,7 @@ struct hw_cap {
 	bool	per_pf_mbox_regs; /* PF mbox specified in per PF registers ? */
 	bool	programmable_chans; /* Channels programmable ? */
 	bool	ipolicer;
+	bool	second_cpt_pass;
 	bool	nix_multiple_dwrr_mtu;   /* Multiple DWRR_MTU to choose from */
 	bool	npc_hash_extract; /* Hash extract enabled ? */
 	bool	npc_exact_match_enabled; /* Exact match supported ? */
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c
index d2163da28d18..0276622e276e 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c
@@ -558,6 +558,7 @@ void rvu_program_channels(struct rvu *rvu)
 
 void rvu_nix_block_cn10k_init(struct rvu *rvu, struct nix_hw *nix_hw)
 {
+	struct rvu_hwinfo *hw = rvu->hw;
 	int blkaddr = nix_hw->blkaddr;
 	u64 cfg;
 
@@ -572,6 +573,16 @@ void rvu_nix_block_cn10k_init(struct rvu *rvu, struct nix_hw *nix_hw)
 	cfg = rvu_read64(rvu, blkaddr, NIX_AF_CFG);
 	cfg |= BIT_ULL(1) | BIT_ULL(2);
 	rvu_write64(rvu, blkaddr, NIX_AF_CFG, cfg);
+
+	cfg = rvu_read64(rvu, blkaddr, NIX_AF_CONST);
+
+	if (!(cfg & BIT_ULL(62))) {
+		hw->cap.second_cpt_pass = false;
+		return;
+	}
+
+	hw->cap.second_cpt_pass = true;
+	nix_hw->rq_msk.total = NIX_RQ_MSK_PROFILES;
 }
 
 void rvu_apr_block_cn10k_init(struct rvu *rvu)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
index 91af1ada11c2..17a4e885503d 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
@@ -6616,3 +6616,128 @@ int rvu_mbox_handler_nix_mcast_grp_update(struct rvu *rvu,
 
 	return ret;
 }
+
+static inline void
+configure_rq_mask(struct rvu *rvu, int blkaddr, int nixlf,
+		  u8 rq_mask, bool enable)
+{
+	u64 cfg, reg;
+
+	cfg = rvu_read64(rvu, blkaddr, NIX_AF_LFX_RX_IPSEC_CFG1(nixlf));
+	reg = rvu_read64(rvu, blkaddr, NIX_AF_LFX_CFG(nixlf));
+	if (enable) {
+		cfg |= NIX_AF_LFX_RX_IPSEC_CFG1_RQ_MASK_ENA;
+		reg &= ~NIX_AF_LFX_CFG_RQ_CPT_MASK_SEL;
+		reg |= FIELD_PREP(NIX_AF_LFX_CFG_RQ_CPT_MASK_SEL, rq_mask);
+	} else {
+		cfg &= ~NIX_AF_LFX_RX_IPSEC_CFG1_RQ_MASK_ENA;
+		reg &= ~NIX_AF_LFX_CFG_RQ_CPT_MASK_SEL;
+	}
+	rvu_write64(rvu, blkaddr, NIX_AF_LFX_RX_IPSEC_CFG1(nixlf), cfg);
+	rvu_write64(rvu, blkaddr, NIX_AF_LFX_CFG(nixlf), reg);
+}
+
+static inline void
+configure_spb_cpt(struct rvu *rvu, int blkaddr, int nixlf,
+		  struct nix_rq_cpt_field_mask_cfg_req *req, bool enable)
+{
+	u64 cfg;
+
+	cfg = rvu_read64(rvu, blkaddr, NIX_AF_LFX_RX_IPSEC_CFG1(nixlf));
+
+	/* Clear the SPB bit fields */
+	cfg &= ~NIX_AF_LFX_RX_IPSEC_CFG1_SPB_CPT_ENA;
+	cfg &= ~NIX_AF_LFX_RX_IPSEC_CFG1_SPB_CPT_SZM1;
+	cfg &= ~NIX_AF_LFX_RX_IPSEC_CFG1_SPB_AURA;
+
+	if (enable) {
+		cfg |= NIX_AF_LFX_RX_IPSEC_CFG1_SPB_CPT_ENA;
+		cfg |= FIELD_PREP(NIX_AF_LFX_RX_IPSEC_CFG1_SPB_CPT_SZM1,
+				  req->ipsec_cfg1.spb_cpt_sizem1);
+		cfg |= FIELD_PREP(NIX_AF_LFX_RX_IPSEC_CFG1_SPB_AURA,
+				  req->ipsec_cfg1.spb_cpt_aura);
+	}
+
+	rvu_write64(rvu, blkaddr, NIX_AF_LFX_RX_IPSEC_CFG1(nixlf), cfg);
+}
+
+static
+int nix_inline_rq_mask_alloc(struct rvu *rvu,
+			     struct nix_rq_cpt_field_mask_cfg_req *req,
+			     struct nix_hw *nix_hw, int blkaddr)
+{
+	u8 rq_cpt_mask_select;
+	int idx, rq_idx;
+	u64 reg_mask;
+	u64 reg_set;
+
+	for (idx = 0; idx < nix_hw->rq_msk.in_use; idx++) {
+		for (rq_idx = 0; rq_idx < RQ_CTX_MASK_MAX; rq_idx++) {
+			reg_mask = rvu_read64(rvu, blkaddr,
+					      NIX_AF_RX_RQX_MASKX(idx, rq_idx));
+			reg_set  = rvu_read64(rvu, blkaddr,
+					      NIX_AF_RX_RQX_SETX(idx, rq_idx));
+			if (reg_mask != req->rq_ctx_word_mask[rq_idx] &&
+			    reg_set != req->rq_ctx_word_set[rq_idx])
+				break;
+		}
+		if (rq_idx == RQ_CTX_MASK_MAX)
+			break;
+	}
+
+	if (idx < nix_hw->rq_msk.in_use) {
+		/* Match found */
+		rq_cpt_mask_select = idx;
+		return idx;
+	}
+
+	if (nix_hw->rq_msk.in_use == nix_hw->rq_msk.total)
+		return NIX_AF_ERR_RQ_CPT_MASK;
+
+	rq_cpt_mask_select = nix_hw->rq_msk.in_use++;
+
+	for (rq_idx = 0; rq_idx < RQ_CTX_MASK_MAX; rq_idx++) {
+		rvu_write64(rvu, blkaddr,
+			    NIX_AF_RX_RQX_MASKX(rq_cpt_mask_select, rq_idx),
+			    req->rq_ctx_word_mask[rq_idx]);
+		rvu_write64(rvu, blkaddr,
+			    NIX_AF_RX_RQX_SETX(rq_cpt_mask_select, rq_idx),
+			    req->rq_ctx_word_set[rq_idx]);
+	}
+
+	return rq_cpt_mask_select;
+}
+
+int
+rvu_mbox_handler_nix_lf_inline_rq_cfg(struct rvu *rvu,
+				      struct nix_rq_cpt_field_mask_cfg_req *req,
+				      struct msg_rsp *rsp)
+{
+	struct rvu_hwinfo *hw = rvu->hw;
+	struct nix_hw *nix_hw;
+	int rq_mask = 0, err;
+	int blkaddr, nixlf;
+
+	err = nix_get_nixlf(rvu, req->hdr.pcifunc, &nixlf, &blkaddr);
+	if (err)
+		return err;
+
+	nix_hw = get_nix_hw(rvu->hw, blkaddr);
+	if (!nix_hw)
+		return NIX_AF_ERR_INVALID_NIXBLK;
+
+	if (!hw->cap.second_cpt_pass)
+		return NIX_AF_ERR_INVALID_NIXBLK;
+
+	if (req->ipsec_cfg1.rq_mask_enable) {
+		rq_mask = nix_inline_rq_mask_alloc(rvu, req, nix_hw, blkaddr);
+		if (rq_mask < 0)
+			return NIX_AF_ERR_RQ_CPT_MASK;
+	}
+
+	configure_rq_mask(rvu, blkaddr, nixlf, rq_mask,
+			  req->ipsec_cfg1.rq_mask_enable);
+	configure_spb_cpt(rvu, blkaddr, nixlf, req,
+			  req->ipsec_cfg1.spb_cpt_enable);
+	return 0;
+}
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
index b24d9e7c8df4..cb5972100058 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
@@ -433,6 +433,8 @@
 #define NIX_AF_MDQX_IN_MD_COUNT(a)	(0x14e0 | (a) << 16)
 #define NIX_AF_SMQX_STATUS(a)		(0x730 | (a) << 16)
 #define NIX_AF_MDQX_OUT_MD_COUNT(a)	(0xdb0 | (a) << 16)
+#define NIX_AF_RX_RQX_MASKX(a, b)       (0x4A40 | (a) << 16 | (b) << 3)
+#define NIX_AF_RX_RQX_SETX(a, b)        (0x4A80 | (a) << 16 | (b) << 3)
 
 #define NIX_PRIV_AF_INT_CFG		(0x8000000)
 #define NIX_PRIV_LFX_CFG		(0x8000010)
@@ -452,6 +454,19 @@
 #define NIX_AF_TL3_PARENT_MASK         GENMASK_ULL(23, 16)
 #define NIX_AF_TL2_PARENT_MASK         GENMASK_ULL(20, 16)
 
+#define NIX_AF_LFX_CFG_RQ_CPT_MASK_SEL	GENMASK_ULL(36, 35)
+
+#define NIX_AF_LFX_RX_IPSEC_CFG1_SPB_AURA	GENMASK_ULL(63, 44)
+#define NIX_AF_LFX_RX_IPSEC_CFG1_RQ_MASK_ENA	BIT_ULL(43)
+#define NIX_AF_LFX_RX_IPSEC_CFG1_SPB_CPT_SZM1	GENMASK_ULL(42, 38)
+#define NIX_AF_LFX_RX_IPSEC_CFG1_SPB_CPT_ENA	BIT_ULL(37)
+#define NIX_AF_LFX_RX_IPSEC_CFG1_SA_IDX_WIDTH	GENMASK_ULL(36, 32)
+#define NIX_AF_LFX_RX_IPSEC_CFG1_SA_IDX_MAX	GENMASK_ULL(31, 0)
+
+#define NIX_AF_LF_CFG_SHIFT		17
+#define NIX_AF_LF_SSO_PF_FUNC_SHIFT	16
+#define NIX_RQ_MSK_PROFILES             4
+
 /* SSO */
 #define SSO_AF_CONST			(0x1000)
 #define SSO_AF_CONST1			(0x1008)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h
index 0596a3ac4c12..a1bcb51d049c 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h
@@ -379,7 +379,9 @@ struct nix_cn10k_rq_ctx_s {
 	u64 ipsech_ena		: 1;
 	u64 ena_wqwd		: 1;
 	u64 cq			: 20;
-	u64 rsvd_36_24		: 13;
+	u64 rsvd_34_24          : 11;
+	u64 port_ol4_dis        : 1;
+	u64 port_il4_dis        : 1;
 	u64 lenerr_dis		: 1;
 	u64 csum_il4_dis	: 1;
 	u64 csum_ol4_dis	: 1;
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH net-next v4 06/14] octeontx2-af: Add support for SPI to SA index translation
  2025-08-19  2:14 [PATCH net-next v4 00/14] Enable Inbound IPsec offload on Marvell CN10K SoC Tanmay Jagdale
                   ` (4 preceding siblings ...)
  2025-08-19  2:14 ` [PATCH net-next v4 05/14] octeontx2-af: Add support for CPT second pass Tanmay Jagdale
@ 2025-08-19  2:14 ` Tanmay Jagdale
  2025-08-19  2:14 ` [PATCH net-next v4 07/14] octeontx2-af: Add mbox to alloc/free BPIDs Tanmay Jagdale
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 16+ messages in thread
From: Tanmay Jagdale @ 2025-08-19  2:14 UTC (permalink / raw)
  To: davem, leon, horms, sgoutham, bbhushan2
  Cc: linux-crypto, netdev, Kiran Kumar K, Nithin Dabilpuram,
	Tanmay Jagdale

From: Kiran Kumar K <kirankumark@marvell.com>

In case of IPsec, the inbound SPI can be random. The HW supports
mapping an SPI to an arbitrary SA index. SPI-to-SA-index translation
is done using a lookup in an NPC CAM entry with SPI, MATCH_ID and
LFID as the key. Add mbox API changes to configure the match table.
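
A sketch of the intended use from a NIX LF driver (the request and
response structs are from this patch; the mbox helper names follow
the usual M() macro convention and the key values are illustrative):

    struct nix_spi_to_sa_add_req *req;
    struct nix_spi_to_sa_add_rsp *rsp;

    req = otx2_mbox_alloc_msg_nix_spi_to_sa_add(&pfvf->mbox);
    if (!req)
            return -ENOMEM;

    req->spi_index = spi;      /* lookup key: SPI and MATCH_ID; */
    req->match_id = match_id;  /* LFID is derived in the AF */
    req->sa_index = sa_index;  /* value programmed for this key */
    req->valid = true;

    err = otx2_sync_mbox_msg(&pfvf->mbox);
    if (err)
            return err;

    /* Save rsp->hash_index and rsp->way; both are needed later for
     * NIX_SPI_TO_SA_DELETE. rsp->is_duplicate flags a key that was
     * already present in the table.
     */
    rsp = (struct nix_spi_to_sa_add_rsp *)
          otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr);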

Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>
Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
---
Changes in V4:
- None

Changes in V3:
- Dropped an extra line which was added accidently

Changes in V2:
- RCT order definition
- Fixed 80 character limit line warning
- Used GENMASK and FIELD_PREP macros
- Removed unused gotos

V1 Link: https://lore.kernel.org/netdev/20250502132005.611698-8-tanmay@marvell.com/
V2 Link: https://lore.kernel.org/netdev/20250618113020.130888-7-tanmay@marvell.com/
V3 Link: https://lore.kernel.org/netdev/20250711121317.340326-7-tanmay@marvell.com/

 .../ethernet/marvell/octeontx2/af/Makefile    |   2 +-
 .../net/ethernet/marvell/octeontx2/af/mbox.h  |  27 +++
 .../net/ethernet/marvell/octeontx2/af/rvu.c   |   3 +
 .../net/ethernet/marvell/octeontx2/af/rvu.h   |  13 ++
 .../ethernet/marvell/octeontx2/af/rvu_nix.c   |   6 +
 .../marvell/octeontx2/af/rvu_nix_spi.c        | 211 ++++++++++++++++++
 .../ethernet/marvell/octeontx2/af/rvu_reg.h   |   9 +
 7 files changed, 270 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/ethernet/marvell/octeontx2/af/rvu_nix_spi.c

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/Makefile b/drivers/net/ethernet/marvell/octeontx2/af/Makefile
index 532813d8d028..366f6537a1ee 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/Makefile
+++ b/drivers/net/ethernet/marvell/octeontx2/af/Makefile
@@ -8,7 +8,7 @@ obj-$(CONFIG_OCTEONTX2_MBOX) += rvu_mbox.o
 obj-$(CONFIG_OCTEONTX2_AF) += rvu_af.o
 
 rvu_mbox-y := mbox.o rvu_trace.o
-rvu_af-y := cgx.o rvu.o rvu_cgx.o rvu_npa.o rvu_nix.o \
+rvu_af-y := cgx.o rvu.o rvu_cgx.o rvu_npa.o rvu_nix.o rvu_nix_spi.o \
 		  rvu_reg.o rvu_npc.o rvu_debugfs.o ptp.o rvu_npc_fs.o \
 		  rvu_cpt.o rvu_devlink.o rpm.o rvu_cn10k.o rvu_switch.o \
 		  rvu_sdp.o rvu_npc_hash.o mcs.o mcs_rvu_if.o mcs_cnf10kb.o \
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index 92a000770668..6b2f46f32cfd 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -335,6 +335,10 @@ M(NIX_READ_INLINE_IPSEC_CFG, 0x8023, nix_read_inline_ipsec_cfg,		\
 M(NIX_LF_INLINE_RQ_CFG, 0x8024, nix_lf_inline_rq_cfg,		\
 				nix_rq_cpt_field_mask_cfg_req,  \
 				msg_rsp)	\
+M(NIX_SPI_TO_SA_ADD,    0x8026, nix_spi_to_sa_add, nix_spi_to_sa_add_req,	\
+				nix_spi_to_sa_add_rsp)				\
+M(NIX_SPI_TO_SA_DELETE, 0x8027, nix_spi_to_sa_delete, nix_spi_to_sa_delete_req,\
+				msg_rsp)					\
 M(NIX_MCAST_GRP_CREATE,	0x802b, nix_mcast_grp_create, nix_mcast_grp_create_req,	\
 				nix_mcast_grp_create_rsp)			\
 M(NIX_MCAST_GRP_DESTROY, 0x802c, nix_mcast_grp_destroy, nix_mcast_grp_destroy_req,	\
@@ -898,6 +902,29 @@ enum nix_rx_vtag0_type {
 	NIX_AF_LFX_RX_VTAG_TYPE7,
 };
 
+/* For SPI to SA index add */
+struct nix_spi_to_sa_add_req {
+	struct mbox_msghdr hdr;
+	u32 sa_index;
+	u32 spi_index;
+	u16 match_id;
+	bool valid;
+};
+
+struct nix_spi_to_sa_add_rsp {
+	struct mbox_msghdr hdr;
+	u16 hash_index;
+	u8 way;
+	u8 is_duplicate;
+};
+
+/* To free SPI to SA index */
+struct nix_spi_to_sa_delete_req {
+	struct mbox_msghdr hdr;
+	u16 hash_index;
+	u8 way;
+};
+
 /* For NIX LF context alloc and init */
 struct nix_lf_alloc_req {
 	struct mbox_msghdr hdr;
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
index 250d9e34b91e..67780d8e95ab 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
@@ -90,6 +90,9 @@ static void rvu_setup_hw_capabilities(struct rvu *rvu)
 
 	if (is_rvu_npc_hash_extract_en(rvu))
 		hw->cap.npc_hash_extract = true;
+
+	if (is_rvu_nix_spi_to_sa_en(rvu))
+		hw->cap.spi_to_sas = 0x2000;
 }
 
 /* Poll a RVU block's register 'offset', for a 'zero'
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index 39385c4fbb4b..04cfaea267dc 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -423,6 +423,7 @@ struct hw_cap {
 	u16	nix_txsch_per_cgx_lmac; /* Max Q's transmitting to CGX LMAC */
 	u16	nix_txsch_per_lbk_lmac; /* Max Q's transmitting to LBK LMAC */
 	u16	nix_txsch_per_sdp_lmac; /* Max Q's transmitting to SDP LMAC */
+	u16     spi_to_sas;		/* Num of SPI to SA index */
 	bool	nix_fixed_txschq_mapping; /* Schq mapping fixed or flexible */
 	bool	nix_shaping;		 /* Is shaping and coloring supported */
 	bool    nix_shaper_toggle_wait; /* Shaping toggle needs poll/wait */
@@ -847,6 +848,17 @@ static inline bool is_rvu_npc_hash_extract_en(struct rvu *rvu)
 	return true;
 }
 
+static inline bool is_rvu_nix_spi_to_sa_en(struct rvu *rvu)
+{
+	u64 nix_const2;
+
+	nix_const2 = rvu_read64(rvu, BLKADDR_NIX0, NIX_AF_CONST2);
+	if ((nix_const2 >> 48) & 0xffff)
+		return true;
+
+	return false;
+}
+
 static inline u16 rvu_nix_chan_cgx(struct rvu *rvu, u8 cgxid,
 				   u8 lmacid, u8 chan)
 {
@@ -1052,6 +1064,7 @@ int nix_get_struct_ptrs(struct rvu *rvu, u16 pcifunc,
 			struct nix_hw **nix_hw, int *blkaddr);
 int rvu_nix_setup_ratelimit_aggr(struct rvu *rvu, u16 pcifunc,
 				 u16 rq_idx, u16 match_id);
+int rvu_nix_free_spi_to_sa_table(struct rvu *rvu, uint16_t pcifunc);
 int nix_aq_context_read(struct rvu *rvu, struct nix_hw *nix_hw,
 			struct nix_cn10k_aq_enq_req *aq_req,
 			struct nix_cn10k_aq_enq_rsp *aq_rsp,
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
index 17a4e885503d..fa21e440eb11 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
@@ -1752,6 +1752,9 @@ int rvu_mbox_handler_nix_lf_free(struct rvu *rvu, struct nix_lf_free_req *req,
 	else
 		rvu_npc_free_mcam_entries(rvu, pcifunc, nixlf);
 
+	/* Reset SPI to SA index table */
+	rvu_nix_free_spi_to_sa_table(rvu, pcifunc);
+
 	/* Free any tx vtag def entries used by this NIX LF */
 	if (!(req->flags & NIX_LF_DONT_FREE_TX_VTAG))
 		nix_free_tx_vtag_entries(rvu, pcifunc);
@@ -5316,6 +5319,9 @@ void rvu_nix_lf_teardown(struct rvu *rvu, u16 pcifunc, int blkaddr, int nixlf)
 	nix_rx_sync(rvu, blkaddr);
 	nix_txschq_free(rvu, pcifunc);
 
+	/* Reset SPI to SA index table */
+	rvu_nix_free_spi_to_sa_table(rvu, pcifunc);
+
 	clear_bit(NIXLF_INITIALIZED, &pfvf->flags);
 
 	if (is_pf_cgxmapped(rvu, pf) && rvu->rep_mode)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix_spi.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix_spi.c
new file mode 100644
index 000000000000..6f048d44a41e
--- /dev/null
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix_spi.c
@@ -0,0 +1,211 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Marvell RVU Admin Function driver
+ *
+ * Copyright (C) 2025 Marvell.
+ *
+ */
+
+#include "rvu.h"
+
+static bool
+nix_spi_to_sa_index_check_duplicate(struct rvu *rvu,
+				    struct nix_spi_to_sa_add_req *req,
+				    struct nix_spi_to_sa_add_rsp *rsp,
+				    int blkaddr, int16_t index, u8 way,
+				    bool *is_valid, int lfidx)
+{
+	u32 spi_index;
+	u16 match_id;
+	bool valid;
+	u64 wkey;
+	u8 lfid;
+
+	wkey = rvu_read64(rvu, blkaddr, NIX_AF_SPI_TO_SA_KEYX_WAYX(index, way));
+
+	spi_index = FIELD_GET(NIX_AF_SPI_TO_SA_SPI_INDEX_MASK, wkey);
+	match_id = FIELD_GET(NIX_AF_SPI_TO_SA_MATCH_ID_MASK, wkey);
+	lfid = FIELD_GET(NIX_AF_SPI_TO_SA_LFID_MASK, wkey);
+	valid = FIELD_GET(NIX_AF_SPI_TO_SA_KEYX_WAYX_VALID, wkey);
+
+	*is_valid = valid;
+	if (!valid)
+		return 0;
+
+	if (req->spi_index == spi_index && req->match_id == match_id &&
+	    lfidx == lfid) {
+		rsp->hash_index = index;
+		rsp->way = way;
+		rsp->is_duplicate = true;
+		return 1;
+	}
+	return 0;
+}
+
+static void  nix_spi_to_sa_index_table_update(struct rvu *rvu,
+					      struct nix_spi_to_sa_add_req *req,
+					      struct nix_spi_to_sa_add_rsp *rsp,
+					      int blkaddr, int16_t index,
+					      u8 way, int lfidx)
+{
+	u64 wvalue;
+	u64 wkey;
+
+	wkey = FIELD_PREP(NIX_AF_SPI_TO_SA_SPI_INDEX_MASK, req->spi_index);
+	wkey |= FIELD_PREP(NIX_AF_SPI_TO_SA_MATCH_ID_MASK, req->match_id);
+	wkey |= FIELD_PREP(NIX_AF_SPI_TO_SA_LFID_MASK, lfidx);
+	wkey |= FIELD_PREP(NIX_AF_SPI_TO_SA_KEYX_WAYX_VALID, req->valid);
+
+	rvu_write64(rvu, blkaddr, NIX_AF_SPI_TO_SA_KEYX_WAYX(index, way),
+		    wkey);
+	wvalue = (req->sa_index & 0xFFFFFFFF);
+	rvu_write64(rvu, blkaddr, NIX_AF_SPI_TO_SA_VALUEX_WAYX(index, way),
+		    wvalue);
+	rsp->hash_index = index;
+	rsp->way = way;
+	rsp->is_duplicate = false;
+}
+
+int rvu_mbox_handler_nix_spi_to_sa_delete(struct rvu *rvu,
+					  struct nix_spi_to_sa_delete_req *req,
+					  struct msg_rsp *rsp)
+{
+	struct rvu_hwinfo *hw = rvu->hw;
+	u16 pcifunc = req->hdr.pcifunc;
+	int lfidx, lfid, blkaddr;
+	int ret = 0;
+	u64 wkey;
+
+	if (!hw->cap.spi_to_sas)
+		return NIX_AF_ERR_PARAM;
+
+	if (!is_nixlf_attached(rvu, pcifunc))
+		return NIX_AF_ERR_AF_LF_INVALID;
+
+	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc);
+	lfidx = rvu_get_lf(rvu, &hw->block[blkaddr], pcifunc, 0);
+	if (lfidx < 0)
+		return NIX_AF_ERR_AF_LF_INVALID;
+
+	mutex_lock(&rvu->rsrc_lock);
+
+	wkey = rvu_read64(rvu, blkaddr,
+			  NIX_AF_SPI_TO_SA_KEYX_WAYX(req->hash_index, req->way));
+	lfid = FIELD_GET(NIX_AF_SPI_TO_SA_LFID_MASK, wkey);
+	if (lfid != lfidx) {
+		ret = NIX_AF_ERR_AF_LF_INVALID;
+		goto unlock;
+	}
+
+	rvu_write64(rvu, blkaddr,
+		    NIX_AF_SPI_TO_SA_KEYX_WAYX(req->hash_index, req->way), 0);
+	rvu_write64(rvu, blkaddr,
+		    NIX_AF_SPI_TO_SA_VALUEX_WAYX(req->hash_index, req->way), 0);
+unlock:
+	mutex_unlock(&rvu->rsrc_lock);
+	return ret;
+}
+
+int rvu_mbox_handler_nix_spi_to_sa_add(struct rvu *rvu,
+				       struct nix_spi_to_sa_add_req *req,
+				       struct nix_spi_to_sa_add_rsp *rsp)
+{
+	u16 way0_index, way1_index, way2_index, way3_index;
+	struct rvu_hwinfo *hw = rvu->hw;
+	u16 pcifunc = req->hdr.pcifunc;
+	bool way0, way1, way2, way3;
+	int ret = 0;
+	int blkaddr;
+	int lfidx;
+	u64 value;
+	u64 key;
+
+	if (!hw->cap.spi_to_sas)
+		return NIX_AF_ERR_PARAM;
+
+	if (!is_nixlf_attached(rvu, pcifunc))
+		return NIX_AF_ERR_AF_LF_INVALID;
+
+	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc);
+	lfidx = rvu_get_lf(rvu, &hw->block[blkaddr], pcifunc, 0);
+	if (lfidx < 0)
+		return NIX_AF_ERR_AF_LF_INVALID;
+
+	mutex_lock(&rvu->rsrc_lock);
+
+	key = FIELD_PREP(NIX_AF_SPI_TO_SA_LFID_MASK, lfidx);
+	key |= FIELD_PREP(NIX_AF_SPI_TO_SA_MATCH_ID_MASK, req->match_id);
+	key |= FIELD_PREP(NIX_AF_SPI_TO_SA_SPI_INDEX_MASK, req->spi_index);
+	rvu_write64(rvu, blkaddr, NIX_AF_SPI_TO_SA_HASH_KEY, key);
+	value = rvu_read64(rvu, blkaddr, NIX_AF_SPI_TO_SA_HASH_VALUE);
+	way0_index = (value & 0x7ff);
+	way1_index = ((value >> 16) & 0x7ff);
+	way2_index = ((value >> 32) & 0x7ff);
+	way3_index = ((value >> 48) & 0x7ff);
+
+	/* Check for duplicate entry */
+	if (nix_spi_to_sa_index_check_duplicate(rvu, req, rsp, blkaddr,
+						way0_index, 0, &way0, lfidx) ||
+	    nix_spi_to_sa_index_check_duplicate(rvu, req, rsp, blkaddr,
+						way1_index, 1, &way1, lfidx) ||
+	    nix_spi_to_sa_index_check_duplicate(rvu, req, rsp, blkaddr,
+						way2_index, 2, &way2, lfidx) ||
+	    nix_spi_to_sa_index_check_duplicate(rvu, req, rsp, blkaddr,
+						way3_index, 3, &way3, lfidx)) {
+		ret = 0;
+		goto unlock;
+	}
+
+	/* If not present, update first available way with index */
+	if (!way0)
+		nix_spi_to_sa_index_table_update(rvu, req, rsp, blkaddr,
+						 way0_index, 0, lfidx);
+	else if (!way1)
+		nix_spi_to_sa_index_table_update(rvu, req, rsp, blkaddr,
+						 way1_index, 1, lfidx);
+	else if (!way2)
+		nix_spi_to_sa_index_table_update(rvu, req, rsp, blkaddr,
+						 way2_index, 2, lfidx);
+	else if (!way3)
+		nix_spi_to_sa_index_table_update(rvu, req, rsp, blkaddr,
+						 way3_index, 3, lfidx);
+unlock:
+	mutex_unlock(&rvu->rsrc_lock);
+	return ret;
+}
+
+int rvu_nix_free_spi_to_sa_table(struct rvu *rvu, uint16_t pcifunc)
+{
+	struct rvu_hwinfo *hw = rvu->hw;
+	int lfidx, lfid;
+	int index, way;
+	int blkaddr;
+	u64 key;
+
+	if (!hw->cap.spi_to_sas)
+		return 0;
+
+	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc);
+	lfidx = rvu_get_lf(rvu, &hw->block[blkaddr], pcifunc, 0);
+	if (lfidx < 0)
+		return NIX_AF_ERR_AF_LF_INVALID;
+
+	mutex_lock(&rvu->rsrc_lock);
+	for (index = 0; index < hw->cap.spi_to_sas / 4; index++) {
+		for (way = 0; way < 4; way++) {
+			key = rvu_read64(rvu, blkaddr,
+					 NIX_AF_SPI_TO_SA_KEYX_WAYX(index, way));
+			lfid = FIELD_GET(NIX_AF_SPI_TO_SA_LFID_MASK, key);
+			if (lfid == lfidx) {
+				rvu_write64(rvu, blkaddr,
+					    NIX_AF_SPI_TO_SA_KEYX_WAYX(index, way),
+					    0);
+				rvu_write64(rvu, blkaddr,
+					    NIX_AF_SPI_TO_SA_VALUEX_WAYX(index, way),
+					    0);
+			}
+		}
+	}
+	mutex_unlock(&rvu->rsrc_lock);
+
+	return 0;
+}
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
index cb5972100058..fcb02846f365 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
@@ -396,6 +396,10 @@
 #define NIX_AF_RX_CHANX_CFG(a)                  (0x1A30 | (a) << 15)
 #define NIX_AF_CINT_TIMERX(a)                   (0x1A40 | (a) << 18)
 #define NIX_AF_LSO_FORMATX_FIELDX(a, b)         (0x1B00 | (a) << 16 | (b) << 3)
+#define NIX_AF_SPI_TO_SA_KEYX_WAYX(a, b)        (0x1C00 | (a) << 16 | (b) << 3)
+#define NIX_AF_SPI_TO_SA_VALUEX_WAYX(a, b)      (0x1C40 | (a) << 16 | (b) << 3)
+#define NIX_AF_SPI_TO_SA_HASH_KEY               (0x1C90)
+#define NIX_AF_SPI_TO_SA_HASH_VALUE             (0x1CA0)
 #define NIX_AF_LFX_CFG(a)		(0x4000 | (a) << 17)
 #define NIX_AF_LFX_SQS_CFG(a)		(0x4020 | (a) << 17)
 #define NIX_AF_LFX_TX_CFG2(a)		(0x4028 | (a) << 17)
@@ -463,6 +467,11 @@
 #define NIX_AF_LFX_RX_IPSEC_CFG1_SA_IDX_WIDTH	GENMASK_ULL(36, 32)
 #define NIX_AF_LFX_RX_IPSEC_CFG1_SA_IDX_MAX	GENMASK_ULL(31, 0)
 
+#define NIX_AF_SPI_TO_SA_KEYX_WAYX_VALID	BIT_ULL(55)
+#define NIX_AF_SPI_TO_SA_LFID_MASK		GENMASK_ULL(54, 48)
+#define NIX_AF_SPI_TO_SA_MATCH_ID_MASK		GENMASK_ULL(47, 32)
+#define NIX_AF_SPI_TO_SA_SPI_INDEX_MASK		GENMASK_ULL(31, 0)
+
 #define NIX_AF_LF_CFG_SHIFT		17
 #define NIX_AF_LF_SSO_PF_FUNC_SHIFT	16
 #define NIX_RQ_MSK_PROFILES             4
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH net-next v4 07/14] octeontx2-af: Add mbox to alloc/free BPIDs
  2025-08-19  2:14 [PATCH net-next v4 00/14] Enable Inbound IPsec offload on Marvell CN10K SoC Tanmay Jagdale
                   ` (5 preceding siblings ...)
  2025-08-19  2:14 ` [PATCH net-next v4 06/14] octeontx2-af: Add support for SPI to SA index translation Tanmay Jagdale
@ 2025-08-19  2:14 ` Tanmay Jagdale
  2025-08-19  2:14 ` [PATCH net-next v4 08/14] octeontx2-pf: ipsec: Allocate Ingress SA table Tanmay Jagdale
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 16+ messages in thread
From: Tanmay Jagdale @ 2025-08-19  2:14 UTC (permalink / raw)
  To: davem, leon, horms, sgoutham, bbhushan2
  Cc: linux-crypto, netdev, Geetha sowjanya, Amit Singh Tomar,
	Tanmay Jagdale

From: Geetha sowjanya <gakula@marvell.com>

Add mbox handlers to allocate/free BPIDs from the free BPID pool.
These can be used by a PF/VF to request up to 8 BPIDs. Also add a
mbox handler to configure NIXX_AF_RX_CHANX_CFG with multiple BPIDs.
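
A sketch of a PF requesting a backpressure ID for the CPT interface
(the structs and NIX_INTF_TYPE_CPT are from this series; the helper
names follow the usual M() macro convention):

    struct nix_alloc_bpid_req *req;
    struct nix_bpids *rsp;

    req = otx2_mbox_alloc_msg_nix_alloc_bpids(&pfvf->mbox);
    if (!req)
            return -ENOMEM;

    req->bpid_cnt = 1;             /* up to 8 BPIDs per request */
    req->type = NIX_INTF_TYPE_CPT;

    err = otx2_sync_mbox_msg(&pfvf->mbox);
    if (err)
            return err;

    rsp = (struct nix_bpids *)
          otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr);
    /* Allocated IDs are in rsp->bpids[0..rsp->bpid_cnt - 1] */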

Signed-off-by: Amit Singh Tomar <amitsinght@marvell.com>
Signed-off-by: Geetha sowjanya <gakula@marvell.com>
Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
---
Changes in V4:
- None

Changes in V3:
- None

Changes in V2:
- None

V1 Link: https://lore.kernel.org/netdev/20250502132005.611698-9-tanmay@marvell.com/
V2 Link: https://lore.kernel.org/netdev/20250618113020.130888-8-tanmay@marvell.com/
V3 Link: https://lore.kernel.org/netdev/20250711121317.340326-8-tanmay@marvell.com/

 .../ethernet/marvell/octeontx2/af/common.h    |   1 +
 .../net/ethernet/marvell/octeontx2/af/mbox.h  |  26 +++++
 .../ethernet/marvell/octeontx2/af/rvu_nix.c   | 100 ++++++++++++++++++
 3 files changed, 127 insertions(+)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/common.h b/drivers/net/ethernet/marvell/octeontx2/af/common.h
index 8a08bebf08c2..656f6e5c8524 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/common.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/common.h
@@ -191,6 +191,7 @@ enum nix_scheduler {
 #define NIX_INTF_TYPE_CGX		0
 #define NIX_INTF_TYPE_LBK		1
 #define NIX_INTF_TYPE_SDP		2
+#define NIX_INTF_TYPE_CPT		3
 
 #define MAX_LMAC_PKIND			12
 #define NIX_LINK_CGX_LMAC(a, b)		(0 + 4 * (a) + (b))
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index 6b2f46f32cfd..709b4eb3de59 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -347,6 +347,9 @@ M(NIX_MCAST_GRP_UPDATE, 0x802d, nix_mcast_grp_update,				\
 				nix_mcast_grp_update_req,			\
 				nix_mcast_grp_update_rsp)			\
 M(NIX_LF_STATS, 0x802e, nix_lf_stats, nix_stats_req, nix_stats_rsp)	\
+M(NIX_ALLOC_BPIDS,     0x8028, nix_alloc_bpids, nix_alloc_bpid_req, nix_bpids) \
+M(NIX_FREE_BPIDS,      0x8029, nix_free_bpids, nix_bpids, msg_rsp)             \
+M(NIX_RX_CHAN_CFG,     0x802a, nix_rx_chan_cfg, nix_rx_chan_cfg, nix_rx_chan_cfg)      \
 /* MCS mbox IDs (range 0xA000 - 0xBFFF) */					\
 M(MCS_ALLOC_RESOURCES,	0xa000, mcs_alloc_resources, mcs_alloc_rsrc_req,	\
 				mcs_alloc_rsrc_rsp)				\
@@ -1365,6 +1368,29 @@ struct nix_mcast_grp_update_rsp {
 	u32 mce_start_index;
 };
 
+struct nix_alloc_bpid_req {
+	struct mbox_msghdr hdr;
+	u8 bpid_cnt;
+	u8 type;
+	u64 rsvd;
+};
+
+struct nix_bpids {
+	struct mbox_msghdr hdr;
+	u8 bpid_cnt;
+	u16 bpids[8];
+	u64 rsvd;
+};
+
+struct nix_rx_chan_cfg {
+	struct mbox_msghdr hdr;
+	u8 type;	/* Interface type(CGX/CPT/LBK) */
+	u8 read;
+	u16 chan;	/* RX channel to be configured */
+	u64 val;	/* NIX_AF_RX_CHAN_CFG value */
+	u64 rsvd;
+};
+
 /* Global NIX inline IPSec configuration */
 struct nix_inline_ipsec_cfg {
 	struct mbox_msghdr hdr;
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
index fa21e440eb11..f4095b105417 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
@@ -570,6 +570,106 @@ void rvu_nix_flr_free_bpids(struct rvu *rvu, u16 pcifunc)
 	mutex_unlock(&rvu->rsrc_lock);
 }
 
+int rvu_mbox_handler_nix_rx_chan_cfg(struct rvu *rvu,
+				     struct nix_rx_chan_cfg *req,
+				     struct nix_rx_chan_cfg *rsp)
+{
+	struct rvu_pfvf *pfvf;
+	int blkaddr;
+	u16 chan;
+
+	pfvf = rvu_get_pfvf(rvu, req->hdr.pcifunc);
+	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, req->hdr.pcifunc);
+	chan = pfvf->rx_chan_base + req->chan;
+
+	if (req->type == NIX_INTF_TYPE_CPT)
+		chan = chan | BIT(11);
+
+	if (req->read) {
+		rsp->val = rvu_read64(rvu, blkaddr,
+				      NIX_AF_RX_CHANX_CFG(chan));
+		rsp->chan = req->chan;
+	} else {
+		rvu_write64(rvu, blkaddr, NIX_AF_RX_CHANX_CFG(chan), req->val);
+	}
+	return 0;
+}
+
+int rvu_mbox_handler_nix_alloc_bpids(struct rvu *rvu,
+				     struct nix_alloc_bpid_req *req,
+				     struct nix_bpids *rsp)
+{
+	u16 pcifunc = req->hdr.pcifunc;
+	struct nix_hw *nix_hw;
+	int blkaddr, cnt = 0;
+	struct nix_bp *bp;
+	int bpid, err;
+
+	err = nix_get_struct_ptrs(rvu, pcifunc, &nix_hw, &blkaddr);
+	if (err)
+		return err;
+
+	bp = &nix_hw->bp;
+
+	/* Interfaces like SSO use the same bpid across multiple
+	 * applications. Check whether a bpid is already allocated
+	 * for this type, else allocate a new one.
+	 */
+	if (req->type > NIX_INTF_TYPE_CPT) {
+		for (bpid = 0; bpid < bp->bpids.max; bpid++) {
+			if (bp->intf_map[bpid] == req->type) {
+				rsp->bpids[cnt] = bpid + bp->free_pool_base;
+				rsp->bpid_cnt++;
+				bp->ref_cnt[bpid]++;
+				cnt++;
+			}
+		}
+		if (rsp->bpid_cnt)
+			return 0;
+	}
+
+	for (cnt = 0; cnt < req->bpid_cnt; cnt++) {
+		bpid = rvu_alloc_rsrc(&bp->bpids);
+		if (bpid < 0)
+			return 0;
+		rsp->bpids[cnt] = bpid + bp->free_pool_base;
+		bp->intf_map[bpid] = req->type;
+		bp->fn_map[bpid] = pcifunc;
+		bp->ref_cnt[bpid]++;
+		rsp->bpid_cnt++;
+	}
+	return 0;
+}
+
+int rvu_mbox_handler_nix_free_bpids(struct rvu *rvu,
+				    struct nix_bpids *req,
+				    struct msg_rsp *rsp)
+{
+	u16 pcifunc = req->hdr.pcifunc;
+	int blkaddr, cnt, err, id;
+	struct nix_hw *nix_hw;
+	struct nix_bp *bp;
+	u16 bpid;
+
+	err = nix_get_struct_ptrs(rvu, pcifunc, &nix_hw, &blkaddr);
+	if (err)
+		return err;
+
+	bp = &nix_hw->bp;
+	for (cnt = 0; cnt < req->bpid_cnt; cnt++) {
+		bpid = req->bpids[cnt] - bp->free_pool_base;
+		bp->ref_cnt[bpid]--;
+		if (bp->ref_cnt[bpid])
+			continue;
+		rvu_free_rsrc(&bp->bpids, bpid);
+		for (id = 0; id < bp->bpids.max; id++) {
+			if (bp->fn_map[id] == pcifunc)
+				bp->fn_map[id] = 0;
+		}
+	}
+	return 0;
+}
+
 static u16 nix_get_channel(u16 chan, bool cpt_link)
 {
 	/* CPT channel for a given link channel is always
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH net-next v4 08/14] octeontx2-pf: ipsec: Allocate Ingress SA table
  2025-08-19  2:14 [PATCH net-next v4 00/14] Enable Inbound IPsec offload on Marvell CN10K SoC Tanmay Jagdale
                   ` (6 preceding siblings ...)
  2025-08-19  2:14 ` [PATCH net-next v4 07/14] octeontx2-af: Add mbox to alloc/free BPIDs Tanmay Jagdale
@ 2025-08-19  2:14 ` Tanmay Jagdale
  2025-08-19  2:15 ` [PATCH net-next v4 09/14] octeontx2-pf: ipsec: Setup NIX HW resources for inbound flows Tanmay Jagdale
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 16+ messages in thread
From: Tanmay Jagdale @ 2025-08-19  2:14 UTC (permalink / raw)
  To: davem, leon, horms, sgoutham, bbhushan2
  Cc: linux-crypto, netdev, Tanmay Jagdale

Every NIX LF has the facility to maintain a contiguous SA table that
is used by NIX RX to find the exact SA context pointer associated with
a particular flow. Allocate a 128-entry SA table where each entry is
2048 bytes, which is enough to hold the complete inbound SA context.

Add the structure definitions for the SA context (cn10k_rx_sa_s) and
the SA bookkeeping information (cn10k_inb_sw_ctx_info).

Also, initialize the inb_sw_ctx_list to track all ingress SAs and their
associated NPC rules and hash-table related data.
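
The resulting table layout, as a sketch (the names are from this
patch; the lookup helper itself is illustrative):

    /* 128 entries x 2048 B each: SA context in the first 1024 B,
     * bookkeeping data in the second half, so entry lookup is a
     * fixed-stride offset from the qmem base.
     */
    static struct cn10k_rx_sa_s *inb_sa_entry(struct otx2_nic *pf,
                                              u32 sa_index)
    {
            void *base = pf->ipsec.inb_sa->base;

            return base + sa_index * pf->ipsec.sa_tbl_entry_sz;
    }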

Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
---
Changes in V4:
- None

Changes in V3:
- None

Changes in V2:
- None

V1 Link: https://lore.kernel.org/netdev/20250502132005.611698-10-tanmay@marvell.com/
V2 Link: https://lore.kernel.org/netdev/20250618113020.130888-9-tanmay@marvell.com/
V3 Link: https://lore.kernel.org/netdev/20250711121317.340326-9-tanmay@marvell.com/

 .../marvell/octeontx2/nic/cn10k_ipsec.c       | 20 ++++
 .../marvell/octeontx2/nic/cn10k_ipsec.h       | 93 +++++++++++++++++++
 2 files changed, 113 insertions(+)

diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
index c691f0722154..ae2aa0b73e96 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
@@ -787,6 +787,7 @@ int cn10k_ipsec_init(struct net_device *netdev)
 {
 	struct otx2_nic *pf = netdev_priv(netdev);
 	u32 sa_size;
+	int err;
 
 	if (!is_dev_support_ipsec_offload(pf->pdev))
 		return 0;
@@ -797,6 +798,22 @@ int cn10k_ipsec_init(struct net_device *netdev)
 			 OTX2_ALIGN : sizeof(struct cn10k_tx_sa_s);
 	pf->ipsec.sa_size = sa_size;
 
+	/* Set sa_tbl_entry_sz to 2048 since we are programming NIX RX
+	 * to calculate SA index as SPI * 2048. The first 1024 bytes
+	 * are used for SA context and the next half for bookkeeping data.
+	 */
+	pf->ipsec.sa_tbl_entry_sz = 2048;
+	err = qmem_alloc(pf->dev, &pf->ipsec.inb_sa, CN10K_IPSEC_INB_MAX_SA,
+			 pf->ipsec.sa_tbl_entry_sz);
+	if (err)
+		return err;
+
+	memset(pf->ipsec.inb_sa->base, 0,
+	       pf->ipsec.sa_tbl_entry_sz * CN10K_IPSEC_INB_MAX_SA);
+
+	/* List to track all ingress SAs */
+	INIT_LIST_HEAD(&pf->ipsec.inb_sw_ctx_list);
+
 	INIT_WORK(&pf->ipsec.sa_work, cn10k_ipsec_sa_wq_handler);
 	pf->ipsec.sa_workq = alloc_workqueue("cn10k_ipsec_sa_workq", 0, 0);
 	if (!pf->ipsec.sa_workq) {
@@ -828,6 +845,9 @@ void cn10k_ipsec_clean(struct otx2_nic *pf)
 	}
 
 	cn10k_outb_cpt_clean(pf);
+
+	/* Free Ingress SA table */
+	qmem_free(pf->dev, pf->ipsec.inb_sa);
 }
 EXPORT_SYMBOL(cn10k_ipsec_clean);
 
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
index 43fbce0d6039..7ffbbedf26d5 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
@@ -52,10 +52,14 @@ DECLARE_STATIC_KEY_FALSE(cn10k_ipsec_sa_enabled);
 #define CN10K_CPT_LF_NQX(a)		(CPT_LFBASE | 0x400 | (a) << 3)
 #define CN10K_CPT_LF_CTX_FLUSH		(CPT_LFBASE | 0x510)
 
+/* Inbound SA */
+#define CN10K_IPSEC_INB_MAX_SA	128
+
 /* IPSEC Instruction opcodes */
 #define CN10K_IPSEC_MAJOR_OP_WRITE_SA 0x01UL
 #define CN10K_IPSEC_MINOR_OP_WRITE_SA 0x09UL
 #define CN10K_IPSEC_MAJOR_OP_OUTB_IPSEC 0x2AUL
+#define CN10K_IPSEC_MAJOR_OP_INB_IPSEC 0x29UL
 
 enum cn10k_cpt_comp_e {
 	CN10K_CPT_COMP_E_NOTDONE = 0x00,
@@ -81,6 +85,19 @@ enum cn10k_cpt_hw_state_e {
 	CN10K_CPT_HW_IN_USE
 };
 
+struct cn10k_inb_sw_ctx_info {
+	struct list_head list;
+	struct cn10k_rx_sa_s *sa_entry;
+	struct xfrm_state *x_state;
+	dma_addr_t sa_iova;
+	u32 npc_mcam_entry;
+	u32 sa_index;
+	__be32 spi;
+	u16 hash_index;	/* Hash index from SPI_TO_SA match */
+	u8 way;		/* SPI_TO_SA match table way index */
+	bool delete_npc_and_match_entry;
+};
+
 struct cn10k_ipsec {
 	/* Outbound CPT */
 	u64 io_addr;
@@ -92,6 +109,12 @@ struct cn10k_ipsec {
 	u32 outb_sa_count;
 	struct work_struct sa_work;
 	struct workqueue_struct *sa_workq;
+
+	/* For Inbound Inline IPSec flows */
+	u32 sa_tbl_entry_sz;
+	struct qmem *inb_sa;
+	struct list_head inb_sw_ctx_list;
+	DECLARE_BITMAP(inb_sa_table, CN10K_IPSEC_INB_MAX_SA);
 };
 
 /* CN10K IPSEC Security Association (SA) */
@@ -146,6 +169,76 @@ struct cn10k_tx_sa_s {
 	u64 hw_ctx[6];		/* W31 - W36 */
 };
 
+struct cn10k_rx_sa_s {
+	u64 inb_ar_win_sz	: 3; /* W0 */
+	u64 hard_life_dec	: 1;
+	u64 soft_life_dec	: 1;
+	u64 count_glb_octets	: 1;
+	u64 count_glb_pkts	: 1;
+	u64 count_mib_bytes	: 1;
+	u64 count_mib_pkts	: 1;
+	u64 hw_ctx_off		: 7;
+	u64 ctx_id		: 16;
+	u64 orig_pkt_fabs	: 1;
+	u64 orig_pkt_free	: 1;
+	u64 pkind		: 6;
+	u64 rsvd_w0_40		: 1;
+	u64 eth_ovrwr		: 1;
+	u64 pkt_output		: 2;
+	u64 pkt_format		: 1;
+	u64 defrag_opt		: 2;
+	u64 x2p_dst		: 1;
+	u64 ctx_push_size	: 7;
+	u64 rsvd_w0_55		: 1;
+	u64 ctx_hdr_size	: 2;
+	u64 aop_valid		: 1;
+	u64 rsvd_w0_59		: 1;
+	u64 ctx_size		: 4;
+
+	u64 rsvd_w1_31_0	: 32; /* W1 */
+	u64 cookie		: 32;
+
+	u64 sa_valid		: 1; /* W2 Control Word */
+	u64 sa_dir		: 1;
+	u64 rsvd_w2_2_3		: 2;
+	u64 ipsec_mode		: 1;
+	u64 ipsec_protocol	: 1;
+	u64 aes_key_len		: 2;
+	u64 enc_type		: 3;
+	u64 life_unit		: 1;
+	u64 auth_type		: 4;
+	u64 encap_type		: 2;
+	u64 et_ovrwr_ddr_en	: 1;
+	u64 esn_en		: 1;
+	u64 tport_l4_incr_csum	: 1;
+	u64 iphdr_verify	: 2;
+	u64 udp_ports_verify	: 1;
+	u64 l2_l3_hdr_on_error	: 1;
+	u64 rsvd_w25_31		: 7;
+	u64 spi			: 32;
+
+	u64 w3;			/* W3 */
+
+	u8 cipher_key[32];	/* W4 - W7 */
+	u32 rsvd_w8_0_31;	/* W8 : IV */
+	u32 iv_gcm_salt;
+	u64 rsvd_w9;		/* W9 */
+	u64 rsvd_w10;		/* W10 : UDP Encap */
+	u32 dest_ipaddr;	/* W11 - Tunnel mode: outer src and dest ipaddr */
+	u32 src_ipaddr;
+	u64 rsvd_w12_w30[19];	/* W12 - W30 */
+
+	u64 ar_base;		/* W31 */
+	u64 ar_valid_mask;	/* W32 */
+	u64 hard_sa_life;	/* W33 */
+	u64 soft_sa_life;	/* W34 */
+	u64 mib_octs;		/* W35 */
+	u64 mib_pkts;		/* W36 */
+	u64 ar_winbits;		/* W37 */
+
+	u64 rsvd_w38_w100[63];
+};
+
 /* CPT instruction parameter-1 */
 #define CN10K_IPSEC_INST_PARAM1_DIS_L4_CSUM		0x1
 #define CN10K_IPSEC_INST_PARAM1_DIS_L3_CSUM		0x2
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH net-next v4 09/14] octeontx2-pf: ipsec: Setup NIX HW resources for inbound flows
  2025-08-19  2:14 [PATCH net-next v4 00/14] Enable Inbound IPsec offload on Marvell CN10K SoC Tanmay Jagdale
                   ` (7 preceding siblings ...)
  2025-08-19  2:14 ` [PATCH net-next v4 08/14] octeontx2-pf: ipsec: Allocate Ingress SA table Tanmay Jagdale
@ 2025-08-19  2:15 ` Tanmay Jagdale
  2025-08-19  2:15 ` [PATCH net-next v4 10/14] octeontx2-pf: ipsec: Handle NPA threshold interrupt Tanmay Jagdale
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 16+ messages in thread
From: Tanmay Jagdale @ 2025-08-19  2:15 UTC (permalink / raw)
  To: davem, leon, horms, sgoutham, bbhushan2
  Cc: linux-crypto, netdev, Tanmay Jagdale

An incoming encrypted IPsec packet in the RVU NIX hardware needs
to be classified for inline fastpath processing and then assigned
an RQ and Aura pool before being sent to CPT for decryption.

Create a dedicated RQ, Aura and Pool with the following setup
specifically for IPsec flows:
 - Set ipsech_en, ipsecd_drop_en in RQ context to enable hardware
   fastpath processing for IPsec flows.
 - Configure the dedicated Aura to raise an interrupt when
   its buffer count drops below a threshold value so that the
   buffers can be replenished from the CPU.

The RQ, Aura and Pool contexts are initialized only when the
esp-hw-offload feature is enabled via ethtool.

Also, move some of the RQ context macro definitions to otx2_common.h
so that they can be used in the IPsec driver as well.
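
(For a sense of scale, the moved AURA macros evaluate to
RQ_BP_LVL_AURA = 255 - (85 * 256) / 100 = 38,
RQ_PASS_LVL_AURA = 255 - (95 * 256) / 100 = 12 and
RQ_DROP_LVL_AURA = 255 - (99 * 256) / 100 = 2; the levels are
expressed in 1/256 units, per the comments alongside the definitions.)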

Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
---
Changes in V4:
- None
 
Changes in V3:
- Added a new function, cn10k_ipsec_free_aura_ptrs() that frees
  the allocated aura pointers during interface clean.

Changes in V2:
- Fixed logic to free pool in case of errors
- Spelling fixes

V1 Link: https://lore.kernel.org/netdev/20250502132005.611698-11-tanmay@marvell.com/
V2 Link: https://lore.kernel.org/netdev/20250618113020.130888-10-tanmay@marvell.com/
V3 Link: https://lore.kernel.org/netdev/20250711121317.340326-10-tanmay@marvell.com/

 .../marvell/octeontx2/nic/cn10k_ipsec.c       | 218 +++++++++++++++++-
 .../marvell/octeontx2/nic/cn10k_ipsec.h       |   9 +
 .../marvell/octeontx2/nic/otx2_common.c       |  23 +-
 .../marvell/octeontx2/nic/otx2_common.h       |  16 ++
 .../ethernet/marvell/octeontx2/nic/otx2_pf.c  |   5 +
 5 files changed, 250 insertions(+), 21 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
index ae2aa0b73e96..5558fb0d122f 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
@@ -346,6 +346,186 @@ static int cn10k_outb_cpt_init(struct net_device *netdev)
 	return ret;
 }
 
+static int cn10k_ipsec_ingress_aura_init(struct otx2_nic *pfvf,
+					 struct otx2_pool *pool,
+					 int aura_id, int pool_id,
+					 int numptrs)
+{
+	struct npa_aq_enq_req *aq;
+
+	/* Initialize this aura's context via AF */
+	aq = otx2_mbox_alloc_msg_npa_aq_enq(&pfvf->mbox);
+	if (!aq)
+		return -ENOMEM;
+
+	aq->aura_id = aura_id;
+	/* Will be filled by AF with correct pool context address */
+	aq->aura.pool_addr = pool_id;
+	aq->aura.pool_caching = 1;
+	aq->aura.shift = ilog2(numptrs) - 8;
+	aq->aura.count = numptrs;
+	aq->aura.limit = numptrs;
+	aq->aura.avg_level = 255;
+	aq->aura.ena = 1;
+	aq->aura.fc_ena = 1;
+	aq->aura.fc_addr = pool->fc_addr->iova;
+	aq->aura.fc_hyst_bits = 0; /* Store count on all updates */
+	aq->aura.thresh_up = 1;
+	aq->aura.thresh = aq->aura.count / 4;
+	aq->aura.thresh_qint_idx = 0;
+
+	/* Enable backpressure for RQ aura */
+	if (!is_otx2_lbkvf(pfvf->pdev)) {
+		aq->aura.bp_ena = 0;
+		/* If NIX1 LF is attached then specify NIX1_RX.
+		 *
+		 * Below NPA_AURA_S[BP_ENA] is set according to the
+		 * NPA_BPINTF_E enumeration given as:
+		 * 0x0 + a*0x1 where 'a' is 0 for NIX0_RX and 1 for NIX1_RX so
+		 * NIX0_RX is 0x0 + 0*0x1 = 0
+		 * NIX1_RX is 0x0 + 1*0x1 = 1
+		 * But in HRM it is given that
+		 * "NPA_AURA_S[BP_ENA](w1[33:32]) - Enable aura backpressure to
+		 * NIX-RX based on [BP] level. One bit per NIX-RX; index
+		 * enumerated by NPA_BPINTF_E."
+		 */
+		if (pfvf->nix_blkaddr == BLKADDR_NIX1)
+			aq->aura.bp_ena = 1;
+#ifdef CONFIG_DCB
+		aq->aura.nix0_bpid = pfvf->bpid[pfvf->queue_to_pfc_map[aura_id]];
+#else
+		aq->aura.nix0_bpid = pfvf->bpid[0];
+#endif
+
+		/* Set backpressure level for RQ's Aura */
+		aq->aura.bp = RQ_BP_LVL_AURA;
+	}
+
+	/* Fill AQ info */
+	aq->ctype = NPA_AQ_CTYPE_AURA;
+	aq->op = NPA_AQ_INSTOP_INIT;
+
+	return otx2_sync_mbox_msg(&pfvf->mbox);
+}
+
+static int cn10k_ipsec_ingress_rq_init(struct otx2_nic *pfvf, u16 qidx, u16 lpb_aura)
+{
+	struct nix_cn10k_aq_enq_req *aq;
+
+	/* Get memory to put this msg */
+	aq = otx2_mbox_alloc_msg_nix_cn10k_aq_enq(&pfvf->mbox);
+	if (!aq)
+		return -ENOMEM;
+
+	aq->rq.cq = qidx;
+	aq->rq.ena = 1;
+	aq->rq.pb_caching = 1;
+	aq->rq.lpb_aura = lpb_aura; /* Use large packet buffer aura */
+	aq->rq.lpb_sizem1 = (DMA_BUFFER_LEN(pfvf->rbsize) / 8) - 1;
+	aq->rq.xqe_imm_size = 0; /* Copying of packet to CQE not needed */
+	aq->rq.flow_tagw = 32; /* Copy full 32bit flow_tag to CQE header */
+	aq->rq.qint_idx = 0;
+	aq->rq.lpb_drop_ena = 1; /* Enable RED dropping for AURA */
+	aq->rq.lpb_aura_pass = RQ_PASS_LVL_AURA;
+	aq->rq.lpb_aura_drop = RQ_DROP_LVL_AURA;
+	aq->rq.ipsech_ena = 1;		/* IPsec HW fast path enable */
+	aq->rq.ipsecd_drop_ena = 1;	/* IPsec dynamic drop enable */
+	aq->rq.ena_wqwd = 1;		/* Store NIX header in packet buffer */
+	aq->rq.first_skip = 16;		/* Store packet after skipping 16x8
+					 * bytes to accommodate NIX header.
+					 */
+
+	/* Fill AQ info */
+	aq->qidx = qidx;
+	aq->ctype = NIX_AQ_CTYPE_RQ;
+	aq->op = NIX_AQ_INSTOP_INIT;
+
+	return otx2_sync_mbox_msg(&pfvf->mbox);
+}
+
+static int cn10k_ipsec_setup_nix_rx_hw_resources(struct otx2_nic *pfvf)
+{
+	struct otx2_hw *hw = &pfvf->hw;
+	struct otx2_pool *pool = NULL;
+	int stack_pages, pool_id;
+	int err, ptr, num_ptrs;
+	dma_addr_t bufptr;
+
+	num_ptrs = pfvf->qset.rqe_cnt;
+	pool_id = pfvf->ipsec.inb_ipsec_pool;
+	stack_pages = (num_ptrs + hw->stack_pg_ptrs - 1) / hw->stack_pg_ptrs;
+	pool = &pfvf->qset.pool[pool_id];
+
+	/* Allocate memory for HW to update Aura count.
+	 * Alloc one cache line, so that it fits all FC_STYPE modes.
+	 */
+	if (!pool->fc_addr) {
+		err = qmem_alloc(pfvf->dev, &pool->fc_addr, 1, OTX2_ALIGN);
+		if (err)
+			return err;
+	}
+
+	mutex_lock(&pfvf->mbox.lock);
+
+	/* Initialize aura context */
+	err = cn10k_ipsec_ingress_aura_init(pfvf, pool, pool_id, pool_id,
+					    num_ptrs);
+	if (err)
+		goto fail;
+
+	/* Initialize pool */
+	err = otx2_pool_init(pfvf, pool_id, stack_pages, num_ptrs, pfvf->rbsize,
+			     AURA_NIX_RQ);
+	if (err)
+		goto fail;
+
+	/* Flush accumulated messages */
+	err = otx2_sync_mbox_msg(&pfvf->mbox);
+	if (err)
+		goto pool_fail;
+
+	/* Allocate pointers and free them to aura/pool */
+	for (ptr = 0; ptr < num_ptrs; ptr++) {
+		err = otx2_alloc_rbuf(pfvf, pool, &bufptr, pool_id, ptr);
+		if (err) {
+			err = -ENOMEM;
+			goto pool_fail;
+		}
+		pfvf->hw_ops->aura_freeptr(pfvf, pool_id, bufptr + OTX2_HEAD_ROOM);
+	}
+
+	/* Initialize RQ and map buffers from pool_id */
+	err = cn10k_ipsec_ingress_rq_init(pfvf, pfvf->ipsec.inb_ipsec_rq, pool_id);
+	if (err)
+		goto pool_fail;
+
+	mutex_unlock(&pfvf->mbox.lock);
+	return 0;
+
+pool_fail:
+	qmem_free(pfvf->dev, pool->stack);
+	page_pool_destroy(pool->page_pool);
+fail:
+	mutex_unlock(&pfvf->mbox.lock);
+	otx2_mbox_reset(&pfvf->mbox.mbox, 0);
+	qmem_free(pfvf->dev, pool->fc_addr);
+	return err;
+}
+
+static int cn10k_inb_cpt_init(struct net_device *netdev)
+{
+	struct otx2_nic *pfvf = netdev_priv(netdev);
+	int ret = 0;
+
+	ret = cn10k_ipsec_setup_nix_rx_hw_resources(pfvf);
+	if (ret) {
+		netdev_err(netdev, "Failed to setup NIX HW resources for IPsec\n");
+		return ret;
+	}
+
+	return ret;
+}
+
 static int cn10k_outb_cpt_clean(struct otx2_nic *pf)
 {
 	int ret;
@@ -762,25 +942,51 @@ static void cn10k_ipsec_sa_wq_handler(struct work_struct *work)
 	rtnl_unlock();
 }
 
+void cn10k_ipsec_free_aura_ptrs(struct otx2_nic *pfvf)
+{
+	struct otx2_pool *pool;
+	int pool_id;
+	u64 iova;
+
+	pool_id = pfvf->ipsec.inb_ipsec_pool;
+	pool = &pfvf->qset.pool[pool_id];
+	do {
+		iova = otx2_aura_allocptr(pfvf, pool_id);
+		if (!iova)
+			break;
+		otx2_free_bufs(pfvf, pool, iova - OTX2_HEAD_ROOM,
+			       pfvf->rbsize);
+	} while (1);
+}
+
 int cn10k_ipsec_ethtool_init(struct net_device *netdev, bool enable)
 {
 	struct otx2_nic *pf = netdev_priv(netdev);
+	int ret = 0;
 
 	/* IPsec offload supported on cn10k */
 	if (!is_dev_support_ipsec_offload(pf->pdev))
 		return -EOPNOTSUPP;
 
-	/* Initialize CPT for outbound ipsec offload */
-	if (enable)
-		return cn10k_outb_cpt_init(netdev);
+	/* Initialize CPT for outbound and inbound IPsec offload */
+	if (enable) {
+		ret = cn10k_outb_cpt_init(netdev);
+		if (ret)
+			return ret;
+
+		ret = cn10k_inb_cpt_init(netdev);
+		if (ret)
+			return ret;
+	}
 
 	/* Don't do CPT cleanup if SA installed */
-	if (pf->ipsec.outb_sa_count) {
+	if (!list_empty(&pf->ipsec.inb_sw_ctx_list) || pf->ipsec.outb_sa_count) {
 		netdev_err(pf->netdev, "SA installed on this device\n");
 		return -EBUSY;
 	}
 
-	return cn10k_outb_cpt_clean(pf);
+	cn10k_ipsec_clean(pf);
+	return ret;
 }
 
 int cn10k_ipsec_init(struct net_device *netdev)
@@ -848,6 +1054,8 @@ void cn10k_ipsec_clean(struct otx2_nic *pf)
 
 	/* Free Ingress SA table */
 	qmem_free(pf->dev, pf->ipsec.inb_sa);
+
+	cn10k_ipsec_free_aura_ptrs(pf);
 }
 EXPORT_SYMBOL(cn10k_ipsec_clean);
 
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
index 7ffbbedf26d5..1b0faf789a38 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
@@ -111,6 +111,8 @@ struct cn10k_ipsec {
 	struct workqueue_struct *sa_workq;
 
 	/* For Inbound Inline IPSec flows */
+	u16 inb_ipsec_rq;
+	u16 inb_ipsec_pool;
 	u32 sa_tbl_entry_sz;
 	struct qmem *inb_sa;
 	struct list_head inb_sw_ctx_list;
@@ -324,6 +326,7 @@ bool otx2_sqe_add_sg_ipsec(struct otx2_nic *pfvf, struct otx2_snd_queue *sq,
 bool cn10k_ipsec_transmit(struct otx2_nic *pf, struct netdev_queue *txq,
 			  struct otx2_snd_queue *sq, struct sk_buff *skb,
 			  int num_segs, int size);
+void cn10k_ipsec_free_aura_ptrs(struct otx2_nic *pfvf);
 #else
 static inline __maybe_unused int cn10k_ipsec_init(struct net_device *netdev)
 {
@@ -354,5 +357,11 @@ cn10k_ipsec_transmit(struct otx2_nic *pf, struct netdev_queue *txq,
 {
 	return true;
 }
+
+static inline __maybe_unused
+void cn10k_ipsec_free_aura_ptrs(struct otx2_nic *pfvf)
+{
+}
+
 #endif
 #endif // CN10K_IPSEC_H
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
index f674729124e6..268abddf2bec 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
@@ -869,22 +869,6 @@ void otx2_sqb_flush(struct otx2_nic *pfvf)
 	}
 }
 
-/* RED and drop levels of CQ on packet reception.
- * For CQ level is measure of emptiness ( 0x0 = full, 255 = empty).
- */
-#define RQ_PASS_LVL_CQ(skid, qsize)	((((skid) + 16) * 256) / (qsize))
-#define RQ_DROP_LVL_CQ(skid, qsize)	(((skid) * 256) / (qsize))
-
-/* RED and drop levels of AURA for packet reception.
- * For AURA level is measure of fullness (0x0 = empty, 255 = full).
- * Eg: For RQ length 1K, for pass/drop level 204/230.
- * RED accepts pkts if free pointers > 102 & <= 205.
- * Drops pkts if free pointers < 102.
- */
-#define RQ_BP_LVL_AURA   (255 - ((85 * 256) / 100)) /* BP when 85% is full */
-#define RQ_PASS_LVL_AURA (255 - ((95 * 256) / 100)) /* RED when 95% is full */
-#define RQ_DROP_LVL_AURA (255 - ((99 * 256) / 100)) /* Drop when 99% is full */
-
 int otx2_rq_init(struct otx2_nic *pfvf, u16 qidx, u16 lpb_aura)
 {
 	struct otx2_qset *qset = &pfvf->qset;
@@ -1234,6 +1218,13 @@ int otx2_config_nix(struct otx2_nic *pfvf)
 	nixlf->rss_sz = MAX_RSS_INDIR_TBL_SIZE;
 	nixlf->rss_grps = MAX_RSS_GROUPS;
 	nixlf->xqe_sz = pfvf->hw.xqe_size == 128 ? NIX_XQESZ_W16 : NIX_XQESZ_W64;
+	/* Add an additional RQ for inline inbound IPsec flows
+	 * and store the RQ index for setting it up later when
+	 * IPsec offload is enabled via ethtool.
+	 */
+	nixlf->rq_cnt++;
+	pfvf->ipsec.inb_ipsec_rq = pfvf->hw.rx_queues;
+
 	/* We don't know absolute NPA LF idx attached.
 	 * AF will replace 'RVU_DEFAULT_PF_FUNC' with
 	 * NPA LF attached to this RVU PF/VF.
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
index e3765b73c434..3104d15623b0 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
@@ -84,6 +84,22 @@ enum arua_mapped_qtypes {
 /* Send skid of 2000 packets required for CQ size of 4K CQEs. */
 #define SEND_CQ_SKID	2000
 
+/* RED and drop levels of CQ on packet reception.
+ * For CQ level is measure of emptiness ( 0x0 = full, 255 = empty).
+ */
+#define RQ_PASS_LVL_CQ(skid, qsize)	((((skid) + 16) * 256) / (qsize))
+#define RQ_DROP_LVL_CQ(skid, qsize)	(((skid) * 256) / (qsize))
+
+/* RED and drop levels of AURA for packet reception.
+ * For AURA level is measure of fullness (0x0 = empty, 255 = full).
+ * Eg: For RQ length 1K, for pass/drop level 204/230.
+ * RED accepts pkts if free pointers > 102 & <= 205.
+ * Drops pkts if free pointers < 102.
+ */
+#define RQ_BP_LVL_AURA   (255 - ((85 * 256) / 100)) /* BP when 85% is full */
+#define RQ_PASS_LVL_AURA (255 - ((95 * 256) / 100)) /* RED when 95% is full */
+#define RQ_DROP_LVL_AURA (255 - ((99 * 256) / 100)) /* Drop when 99% is full */
+
 #define OTX2_GET_RX_STATS(reg) \
 	otx2_read64(pfvf, NIX_LF_RX_STATX(reg))
 #define OTX2_GET_TX_STATS(reg) \
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
index b23585c5e5c2..ceae1104cfb2 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
@@ -1627,6 +1627,10 @@ int otx2_init_hw_resources(struct otx2_nic *pf)
 	hw->sqpool_cnt = otx2_get_total_tx_queues(pf);
 	hw->pool_cnt = hw->rqpool_cnt + hw->sqpool_cnt;
 
+	/* Increase pool count by 1 for ingress inline IPsec */
+	pf->ipsec.inb_ipsec_pool = hw->pool_cnt;
+	hw->pool_cnt++;
+
 	if (!otx2_rep_dev(pf->pdev)) {
 		/* Maximum hardware supported transmit length */
 		pf->tx_max_pktlen = pf->netdev->max_mtu + OTX2_ETH_HLEN;
@@ -1792,6 +1796,7 @@ void otx2_free_hw_resources(struct otx2_nic *pf)
 
 	/* Free RQ buffer pointers*/
 	otx2_free_aura_ptr(pf, AURA_NIX_RQ);
+	cn10k_ipsec_free_aura_ptrs(pf);
 
 	otx2_free_cq_res(pf);
 
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH net-next v4 10/14] octeontx2-pf: ipsec: Handle NPA threshold interrupt
  2025-08-19  2:14 [PATCH net-next v4 00/14] Enable Inbound IPsec offload on Marvell CN10K SoC Tanmay Jagdale
                   ` (8 preceding siblings ...)
  2025-08-19  2:15 ` [PATCH net-next v4 09/14] octeontx2-pf: ipsec: Setup NIX HW resources for inbound flows Tanmay Jagdale
@ 2025-08-19  2:15 ` Tanmay Jagdale
  2025-08-19  2:15 ` [PATCH net-next v4 11/14] octeontx2-pf: ipsec: Initialize ingress IPsec Tanmay Jagdale
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 16+ messages in thread
From: Tanmay Jagdale @ 2025-08-19  2:15 UTC (permalink / raw)
  To: davem, leon, horms, sgoutham, bbhushan2
  Cc: linux-crypto, netdev, Tanmay Jagdale

The NPA Aura pool that is dedicated to 1st-pass inline IPsec flows
raises an interrupt when the buffer count of that aura_id drops below
a threshold value.

Add the following changes to handle this interrupt:
- Increase the number of MSIX vectors requested for the PF/VF to
  include NPA vector.
- Create a workqueue (refill_npa_inline_ipsecq) to allocate and
  refill buffers to the pool.
- When the interrupt is raised, schedule the workqueue entry,
  cn10k_ipsec_npa_refill_inb_ipsecq(), where the current count of
  consumed buffers is determined via NPA_LF_AURA_OP_CNT and then
  replenished (see the sketch after this list).
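
A sketch of the NPA atomic-op access pattern used by the refill
handler (mirroring cn10k_ipsec_npa_refill_inb_ipsecq() below; the
aura to query is written into bits 63:44 of the value, and
OP_CNT[COUNT] is returned in bits 35:0):

	ptr = otx2_get_regaddr(pfvf, NPA_LF_AURA_OP_CNT);
	count = otx2_atomic64_add((u64)pfvf->ipsec.inb_ipsec_pool << 44, ptr);
	count &= GENMASK_ULL(35, 0);	/* consumed-buffer count */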

Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
---
Changes in V4:
- None

Changes in V3:
- Dropped the unused 'ptr' variable in cn10k_inb_cpt_init().
- Use FIELD_PREP macros
- Reduced the number of MSIX vectors requested for NPA
- Disabled the NPA threshold interrupt in cn10k_ipsec_free_aura_ptrs()

Changes in V2:
- Fixed sparse warnings

V1 Link: https://lore.kernel.org/netdev/20250502132005.611698-12-tanmay@marvell.com/
V2 Link: https://lore.kernel.org/netdev/20250618113020.130888-11-tanmay@marvell.com/
V3 Link: https://lore.kernel.org/netdev/20250711121317.340326-11-tanmay@marvell.com/

 .../marvell/octeontx2/nic/cn10k_ipsec.c       | 104 +++++++++++++++++-
 .../marvell/octeontx2/nic/cn10k_ipsec.h       |   1 +
 .../ethernet/marvell/octeontx2/nic/otx2_pf.c  |   4 +
 .../ethernet/marvell/octeontx2/nic/otx2_reg.h |   5 +
 .../ethernet/marvell/octeontx2/nic/otx2_vf.c  |   4 +
 5 files changed, 116 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
index 5558fb0d122f..d5229cc17d2e 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
@@ -512,10 +512,72 @@ static int cn10k_ipsec_setup_nix_rx_hw_resources(struct otx2_nic *pfvf)
 	return err;
 }
 
+static void cn10k_ipsec_npa_refill_inb_ipsecq(struct work_struct *work)
+{
+	struct cn10k_ipsec *ipsec = container_of(work, struct cn10k_ipsec,
+						 refill_npa_inline_ipsecq);
+	struct otx2_nic *pfvf = container_of(ipsec, struct otx2_nic, ipsec);
+	struct otx2_pool *pool = NULL;
+	int err, pool_id, idx;
+	void __iomem *ptr;
+	dma_addr_t bufptr;
+	u64 val, count;
+
+	val = otx2_read64(pfvf, NPA_LF_QINTX_INT(0));
+	if (!(val & 1))
+		return;
+
+	ptr = otx2_get_regaddr(pfvf, NPA_LF_AURA_OP_INT);
+	val = otx2_atomic64_add(((u64)pfvf->ipsec.inb_ipsec_pool << 44), ptr);
+
+	/* Refill buffers only on a threshold interrupt */
+	if (!(val & NPA_LF_AURA_OP_INT_THRESH_INT))
+		return;
+
+	local_bh_disable();
+
+	/* Get the current number of buffers consumed */
+	ptr = otx2_get_regaddr(pfvf, NPA_LF_AURA_OP_CNT);
+	count = otx2_atomic64_add(((u64)pfvf->ipsec.inb_ipsec_pool << 44), ptr);
+	count &= GENMASK_ULL(35, 0);
+
+	/* Allocate and refill to the IPsec pool */
+	pool_id = pfvf->ipsec.inb_ipsec_pool;
+	pool = &pfvf->qset.pool[pool_id];
+
+	for (idx = 0; idx < count; idx++) {
+		err = otx2_alloc_rbuf(pfvf, pool, &bufptr, pool_id, idx);
+		if (err) {
+			netdev_err(pfvf->netdev,
+				   "Insufficient memory for IPsec pool buffers\n");
+			break;
+		}
+		pfvf->hw_ops->aura_freeptr(pfvf, pool_id, bufptr + OTX2_HEAD_ROOM);
+	}
+
+	/* Clear/ACK Interrupt */
+	val = FIELD_PREP(NPA_LF_AURA_OP_INT_AURA, pfvf->ipsec.inb_ipsec_pool);
+	val |= NPA_LF_AURA_OP_INT_THRESH_INT;
+	otx2_write64(pfvf, NPA_LF_AURA_OP_INT, val);
+
+	local_bh_enable();
+}
+
+static irqreturn_t cn10k_ipsec_npa_inb_ipsecq_intr_handler(int irq, void *data)
+{
+	struct otx2_nic *pf = data;
+
+	schedule_work(&pf->ipsec.refill_npa_inline_ipsecq);
+
+	return IRQ_HANDLED;
+}
+
 static int cn10k_inb_cpt_init(struct net_device *netdev)
 {
 	struct otx2_nic *pfvf = netdev_priv(netdev);
-	int ret = 0;
+	int ret = 0, vec;
+	char *irq_name;
+	u64 val;
 
 	ret = cn10k_ipsec_setup_nix_rx_hw_resources(pfvf);
 	if (ret) {
@@ -523,6 +585,34 @@ static int cn10k_inb_cpt_init(struct net_device *netdev)
 		return ret;
 	}
 
+	/* Work entry for refilling the NPA queue for ingress inline IPSec */
+	INIT_WORK(&pfvf->ipsec.refill_npa_inline_ipsecq,
+		  cn10k_ipsec_npa_refill_inb_ipsecq);
+
+	/* Register NPA interrupt */
+	vec = pfvf->hw.npa_msixoff;
+	irq_name = &pfvf->hw.irq_name[vec * NAME_SIZE];
+	snprintf(irq_name, NAME_SIZE, "%s-npa-qint", pfvf->netdev->name);
+
+	ret = request_irq(pci_irq_vector(pfvf->pdev, vec),
+			  cn10k_ipsec_npa_inb_ipsecq_intr_handler, 0,
+			  irq_name, pfvf);
+	if (ret) {
+		dev_err(pfvf->dev,
+			"RVUPF%d: IRQ registration failed for NPA QINT\n",
+			rvu_get_pf(pfvf->pdev, pfvf->pcifunc));
+		return ret;
+	}
+
+	/* Enable NPA threshold interrupt */
+	val = FIELD_PREP(NPA_LF_AURA_OP_INT_AURA, pfvf->ipsec.inb_ipsec_pool);
+	val |= NPA_LF_AURA_OP_INT_SETOP;
+	val |= NPA_LF_AURA_OP_INT_THRESH_ENA;
+	otx2_write64(pfvf, NPA_LF_AURA_OP_INT, val);
+
+	/* Enable interrupt */
+	otx2_write64(pfvf, NPA_LF_QINTX_ENA_W1S(0), BIT_ULL(0));
+
 	return ret;
 }
 
@@ -946,7 +1036,12 @@ void cn10k_ipsec_free_aura_ptrs(struct otx2_nic *pfvf)
 {
 	struct otx2_pool *pool;
 	int pool_id;
-	u64 iova;
+	u64 iova, val;
+
+	/* Disable threshold interrupt */
+	val = FIELD_PREP(NPA_LF_AURA_OP_INT_AURA, pfvf->ipsec.inb_ipsec_pool);
+	val |= NPA_LF_AURA_OP_INT_THRESH_ENA;
+	otx2_write64(pfvf, NPA_LF_AURA_OP_INT, val);
 
 	pool_id = pfvf->ipsec.inb_ipsec_pool;
 	pool = &pfvf->qset.pool[pool_id];
@@ -1039,6 +1134,8 @@ EXPORT_SYMBOL(cn10k_ipsec_init);
 
 void cn10k_ipsec_clean(struct otx2_nic *pf)
 {
+	int vec;
+
 	if (!is_dev_support_ipsec_offload(pf->pdev))
 		return;
 
@@ -1056,6 +1153,9 @@ void cn10k_ipsec_clean(struct otx2_nic *pf)
 	qmem_free(pf->dev, pf->ipsec.inb_sa);
 
 	cn10k_ipsec_free_aura_ptrs(pf);
+
+	vec = pci_irq_vector(pf->pdev, pf->hw.npa_msixoff);
+	free_irq(vec, pf);
 }
 EXPORT_SYMBOL(cn10k_ipsec_clean);
 
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
index 1b0faf789a38..7eb4ca36c14a 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
@@ -117,6 +117,7 @@ struct cn10k_ipsec {
 	struct qmem *inb_sa;
 	struct list_head inb_sw_ctx_list;
 	DECLARE_BITMAP(inb_sa_table, CN10K_IPSEC_INB_MAX_SA);
+	struct work_struct refill_npa_inline_ipsecq;
 };
 
 /* CN10K IPSEC Security Association (SA) */
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
index ceae1104cfb2..d1e77ea7b290 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
@@ -2995,6 +2995,10 @@ int otx2_realloc_msix_vectors(struct otx2_nic *pf)
 	num_vec = hw->nix_msixoff;
 	num_vec += NIX_LF_CINT_VEC_START + hw->max_queues;
 
+	/* Update number of vectors to include NPA */
+	if (hw->nix_msixoff < hw->npa_msixoff)
+		num_vec = hw->npa_msixoff;
+
 	otx2_disable_mbox_intr(pf);
 	pci_free_irq_vectors(hw->pdev);
 	err = pci_alloc_irq_vectors(hw->pdev, num_vec, num_vec, PCI_IRQ_MSIX);
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_reg.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_reg.h
index 1cd576fd09c5..d270f96c5a3c 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_reg.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_reg.h
@@ -109,6 +109,11 @@
 #define NPA_LF_QINTX_ENA_W1C(a)         (NPA_LFBASE | 0x330 | (a) << 12)
 #define NPA_LF_AURA_BATCH_FREE0         (NPA_LFBASE | 0x400)
 
+#define NPA_LF_AURA_OP_INT_THRESH_INT	BIT_ULL(16)
+#define NPA_LF_AURA_OP_INT_THRESH_ENA	BIT_ULL(17)
+#define NPA_LF_AURA_OP_INT_SETOP	BIT_ULL(43)
+#define NPA_LF_AURA_OP_INT_AURA		GENMASK_ULL(63, 44)
+
 /* NIX LF registers */
 #define	NIX_LFBASE			(BLKTYPE_NIX << RVU_FUNC_BLKADDR_SHIFT)
 #define	NIX_LF_RX_SECRETX(a)		(NIX_LFBASE | 0x0 | (a) << 3)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
index 5589fccd370b..951d5c17c75d 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
@@ -547,6 +547,10 @@ static int otx2vf_realloc_msix_vectors(struct otx2_nic *vf)
 	num_vec = hw->nix_msixoff;
 	num_vec += NIX_LF_CINT_VEC_START + hw->max_queues;
 
+	/* Update number of vectors to include NPA */
+	if (hw->nix_msixoff < hw->npa_msixoff)
+		num_vec = hw->npa_msixoff;
+
 	otx2vf_disable_mbox_intr(vf);
 	pci_free_irq_vectors(hw->pdev);
 	err = pci_alloc_irq_vectors(hw->pdev, num_vec, num_vec, PCI_IRQ_MSIX);
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH net-next v4 11/14] octeontx2-pf: ipsec: Initialize ingress IPsec
  2025-08-19  2:14 [PATCH net-next v4 00/14] Enable Inbound IPsec offload on Marvell CN10K SoC Tanmay Jagdale
                   ` (9 preceding siblings ...)
  2025-08-19  2:15 ` [PATCH net-next v4 10/14] octeontx2-pf: ipsec: Handle NPA threshold interrupt Tanmay Jagdale
@ 2025-08-19  2:15 ` Tanmay Jagdale
  2025-08-19  2:15 ` [PATCH net-next v4 12/14] octeontx2-pf: ipsec: Process CPT metapackets Tanmay Jagdale
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 16+ messages in thread
From: Tanmay Jagdale @ 2025-08-19  2:15 UTC (permalink / raw)
  To: davem, leon, horms, sgoutham, bbhushan2
  Cc: linux-crypto, netdev, Tanmay Jagdale

Initialize ingress inline IPsec offload when the ESP offload feature
is enabled via ethtool. As part of initialization, the following
mailboxes must be invoked to configure inline IPsec:

NIX_INLINE_IPSEC_LF_CFG - Every NIX LF has the provision to maintain a
                          contiguous SA table. This mailbox configures
                          the SA table base address, the size of each SA,
                          and the maximum number of entries in the table.
                          Currently, we support a 128-entry table where
                          each 2048-byte entry holds the SA context in
                          its first 1024 bytes (see the sketch after
                          this list).

NIX_LF_INLINE_RQ_CFG    - Post decryption, CPT sends a metapacket of 256
                          bytes which has enough packet headers to help
                          NIX RX classify it. However, since the packet is
                          not complete, we cannot perform checksum and
                          packet length verification. Hence, configure the
                          RQ context to disable L3, L4 checksum and length
                          verification for packets coming from CPT.

NIX_INLINE_IPSEC_CFG    - RVU hardware supports one common CPT LF for
                          inbound ingress IPsec flows. This CPT LF is
                          configured via this mailbox; it is a one-time,
                          system-wide configuration.

NIX_ALLOC_BPID          - Configure backpressure between the NIX and CPT
                          blocks by allocating a backpressure ID using
                          this mailbox for the ingress inline IPsec flows.

NIX_FREE_BPID           - Free this BPID when ESP offload is disabled
                          via ethtool.
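
The LF_CFG encoding follows from the table geometry set up earlier:
128 entries need a 7-bit index and each 2048-byte entry is 2^11 bytes.
A minimal sketch of that arithmetic (values as programmed in
cn10k_inb_nix_inline_lf_cfg() below):

	req->ipsec_cfg0.sa_pow2_size = ilog2(2048);		   /* 0xb */
	req->ipsec_cfg1.sa_idx_max = CN10K_IPSEC_INB_MAX_SA - 1;  /* 127 */
	req->ipsec_cfg1.sa_idx_w = ilog2(CN10K_IPSEC_INB_MAX_SA); /* 0x7 */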

Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
---
Changes in V4:
- Moved BPID configuration before initializing CPT for inbound
  configuration
 
Changes in V3:
- None

Changes in V2:
- Fixed commit message be within 75 characters

V1 Link: https://lore.kernel.org/netdev/20250502132005.611698-13-tanmay@marvell.com/ 
V2 Link: https://lore.kernel.org/netdev/20250618113020.130888-12-tanmay@marvell.com/
V3 Link: https://lore.kernel.org/netdev/20250711121317.340326-12-tanmay@marvell.com/

 .../marvell/octeontx2/nic/cn10k_ipsec.c       | 171 +++++++++++++++++-
 .../marvell/octeontx2/nic/cn10k_ipsec.h       |   2 +
 2 files changed, 171 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
index d5229cc17d2e..550a8da04f1f 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
@@ -346,6 +346,100 @@ static int cn10k_outb_cpt_init(struct net_device *netdev)
 	return ret;
 }
 
+static int cn10k_inb_nix_inline_lf_cfg(struct otx2_nic *pfvf)
+{
+	struct nix_inline_ipsec_lf_cfg *req;
+	int ret = 0;
+
+	mutex_lock(&pfvf->mbox.lock);
+	req = otx2_mbox_alloc_msg_nix_inline_ipsec_lf_cfg(&pfvf->mbox);
+	if (!req) {
+		ret = -ENOMEM;
+		goto error;
+	}
+
+	req->sa_base_addr = pfvf->ipsec.inb_sa->iova;
+	req->ipsec_cfg0.tag_const = 0;
+	req->ipsec_cfg0.tt = 0;
+	req->ipsec_cfg0.lenm1_max = 11872; /* (Max packet size - 128 (first skip)) */
+	req->ipsec_cfg0.sa_pow2_size = 0xb; /* 2048 */
+	req->ipsec_cfg1.sa_idx_max = CN10K_IPSEC_INB_MAX_SA - 1;
+	req->ipsec_cfg1.sa_idx_w = 0x7;
+	req->enable = 1;
+
+	ret = otx2_sync_mbox_msg(&pfvf->mbox);
+error:
+	mutex_unlock(&pfvf->mbox.lock);
+	return ret;
+}
+
+static int cn10k_inb_nix_inline_lf_rq_cfg(struct otx2_nic *pfvf)
+{
+	struct nix_rq_cpt_field_mask_cfg_req *req;
+	int ret = 0, i;
+
+	mutex_lock(&pfvf->mbox.lock);
+	req = otx2_mbox_alloc_msg_nix_lf_inline_rq_cfg(&pfvf->mbox);
+	if (!req) {
+		ret = -ENOMEM;
+		goto error;
+	}
+
+	for (i = 0; i < RQ_CTX_MASK_MAX; i++)
+		req->rq_ctx_word_mask[i] = 0xffffffffffffffff;
+
+	req->rq_set.len_ol3_dis = 1;
+	req->rq_set.len_ol4_dis = 1;
+	req->rq_set.len_il3_dis = 1;
+
+	req->rq_set.len_il4_dis = 1;
+	req->rq_set.csum_ol4_dis = 1;
+	req->rq_set.csum_il4_dis = 1;
+
+	req->rq_set.lenerr_dis = 1;
+	req->rq_set.port_ol4_dis = 1;
+	req->rq_set.port_il4_dis = 1;
+
+	req->rq_set.lpb_drop_ena = 0;
+	req->rq_set.xqe_drop_ena = 0;
+
+	req->ipsec_cfg1.rq_mask_enable = 1;
+	req->ipsec_cfg1.spb_cpt_enable = 0;
+
+	ret = otx2_sync_mbox_msg(&pfvf->mbox);
+error:
+	mutex_unlock(&pfvf->mbox.lock);
+	return ret;
+}
+
+static int cn10k_inb_nix_inline_ipsec_cfg(struct otx2_nic *pfvf)
+{
+	struct cpt_rx_inline_lf_cfg_msg *req;
+	int ret = 0;
+
+	mutex_lock(&pfvf->mbox.lock);
+	req = otx2_mbox_alloc_msg_cpt_rx_inline_lf_cfg(&pfvf->mbox);
+	if (!req) {
+		ret = -ENOMEM;
+		goto error;
+	}
+
+	req->sso_pf_func = 0;
+	req->opcode = CN10K_IPSEC_MAJOR_OP_INB_IPSEC | (1 << 6);
+	req->param1 = 7; /* bit 0:ip_csum_dis 1:tcp_csum_dis 2:esp_trailer_dis */
+	req->param2 = 0;
+	req->bpid = pfvf->ipsec.bpid;
+	req->credit = pfvf->qset.rqe_cnt;
+	req->credit_th = 100;
+	req->ctx_ilen_valid = 1;
+	req->ctx_ilen = 5;
+
+	ret = otx2_sync_mbox_msg(&pfvf->mbox);
+error:
+	mutex_unlock(&pfvf->mbox.lock);
+	return ret;
+}
+
 static int cn10k_ipsec_ingress_aura_init(struct otx2_nic *pfvf,
 					 struct otx2_pool *pool,
 					 int aura_id, int pool_id,
@@ -613,6 +707,28 @@ static int cn10k_inb_cpt_init(struct net_device *netdev)
 	/* Enable interrupt */
 	otx2_write64(pfvf, NPA_LF_QINTX_ENA_W1S(0), BIT_ULL(0));
 
+	/* Enable inbound inline IPSec in NIX LF */
+	ret = cn10k_inb_nix_inline_lf_cfg(pfvf);
+	if (ret) {
+		netdev_err(netdev, "Error configuring NIX for Inline IPSec\n");
+		goto out;
+	}
+
+	/* IPsec specific RQ settings in NIX LF */
+	ret = cn10k_inb_nix_inline_lf_rq_cfg(pfvf);
+	if (ret) {
+		netdev_err(netdev, "Error configuring NIX for Inline IPSec\n");
+		goto out;
+	}
+
+	/* One-time configuration to enable CPT LF for inline inbound IPSec */
+	ret = cn10k_inb_nix_inline_ipsec_cfg(pfvf);
+	if (ret && ret != -EEXIST)
+		netdev_err(netdev, "CPT LF configuration error\n");
+	else
+		ret = 0;
+
+out:
 	return ret;
 }
 
@@ -1054,6 +1170,53 @@ void cn10k_ipsec_free_aura_ptrs(struct otx2_nic *pfvf)
 	} while (1);
 }
 
+static int cn10k_ipsec_configure_cpt_bpid(struct otx2_nic *pfvf)
+{
+	struct nix_alloc_bpid_req *req;
+	struct nix_bpids *rsp;
+	int rc;
+
+	req = otx2_mbox_alloc_msg_nix_alloc_bpids(&pfvf->mbox);
+	if (!req)
+		return -ENOMEM;
+	req->bpid_cnt = 1;
+	req->type = NIX_INTF_TYPE_CPT;
+
+	rc = otx2_sync_mbox_msg(&pfvf->mbox);
+	if (rc)
+		return rc;
+
+	rsp = (struct nix_bpids *)otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr);
+	if (IS_ERR(rsp))
+		return PTR_ERR(rsp);
+
+	/* Store the bpid for configuring it in the future */
+	pfvf->ipsec.bpid = rsp->bpids[0];
+
+	return 0;
+}
+
+static int cn10k_ipsec_free_cpt_bpid(struct otx2_nic *pfvf)
+{
+	struct nix_bpids *req;
+	int rc;
+
+	req = otx2_mbox_alloc_msg_nix_free_bpids(&pfvf->mbox);
+	if (!req)
+		return -ENOMEM;
+
+	req->bpid_cnt = 1;
+	req->bpids[0] = pfvf->ipsec.bpid;
+
+	rc = otx2_sync_mbox_msg(&pfvf->mbox);
+	if (rc)
+		return rc;
+
+	/* Clear the bpid */
+	pfvf->ipsec.bpid = 0;
+	return 0;
+}
+
 int cn10k_ipsec_ethtool_init(struct net_device *netdev, bool enable)
 {
 	struct otx2_nic *pf = netdev_priv(netdev);
@@ -1069,9 +1232,11 @@ int cn10k_ipsec_ethtool_init(struct net_device *netdev, bool enable)
 		if (ret)
 			return ret;
 
+		/* Configure NIX <-> CPT backpressure */
+		ret = cn10k_ipsec_configure_cpt_bpid(pf);
+
 		ret = cn10k_inb_cpt_init(netdev);
-		if (ret)
-			return ret;
+		return ret;
 	}
 
 	/* Don't do CPT cleanup if SA installed */
@@ -1156,6 +1321,8 @@ void cn10k_ipsec_clean(struct otx2_nic *pf)
 
 	vec = pci_irq_vector(pf->pdev, pf->hw.npa_msixoff);
 	free_irq(vec, pf);
+
+	cn10k_ipsec_free_cpt_bpid(pf);
 }
 EXPORT_SYMBOL(cn10k_ipsec_clean);
 
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
index 7eb4ca36c14a..80bc0e4a9da6 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
@@ -104,6 +104,8 @@ struct cn10k_ipsec {
 	atomic_t cpt_state;
 	struct cn10k_cpt_inst_queue iq;
 
+	u32 bpid;	/* Backpressure ID for NIX <-> CPT */
+
 	/* SA info */
 	u32 sa_size;
 	u32 outb_sa_count;
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH net-next v4 12/14] octeontx2-pf: ipsec: Process CPT metapackets
  2025-08-19  2:14 [PATCH net-next v4 00/14] Enable Inbound IPsec offload on Marvell CN10K SoC Tanmay Jagdale
                   ` (10 preceding siblings ...)
  2025-08-19  2:15 ` [PATCH net-next v4 11/14] octeontx2-pf: ipsec: Initialize ingress IPsec Tanmay Jagdale
@ 2025-08-19  2:15 ` Tanmay Jagdale
  2025-08-19  2:15 ` [PATCH net-next v4 13/14] octeontx2-pf: ipsec: Manage NPC rules and SPI-to-SA table entries Tanmay Jagdale
  2025-08-19  2:15 ` [PATCH net-next v4 14/14] octeontx2-pf: ipsec: Add XFRM state and policy hooks for inbound flows Tanmay Jagdale
  13 siblings, 0 replies; 16+ messages in thread
From: Tanmay Jagdale @ 2025-08-19  2:15 UTC (permalink / raw)
  To: davem, leon, horms, sgoutham, bbhushan2
  Cc: linux-crypto, netdev, Tanmay Jagdale

CPT hardware forwards decrypted IPsec packets to NIX via the X2P bus
as metapackets, which are 256 bytes in length. Each metapacket
contains a CPT_PARSE_HDR_S and the initial bytes of the decrypted
packet, which help NIX RX classify it and submit it to the CPU.
Additionally, CPT sets BIT(11) of the channel number to indicate
that it is a 2nd-pass packet from CPT.

Since the metapackets are not complete packets, they don't have to go
through L3/L4 length and checksum verification, so these checks are
disabled via the NIX_LF_INLINE_RQ_CFG mailbox during IPsec initialization.

The CPT_PARSE_HDR_S contains a WQE pointer to the complete decrypted
packet. Add code in the RX NAPI handler to parse the header and extract
the WQE pointer. Later, use this WQE pointer to construct the skb, set
the XFRM packet mode flags to indicate successful decryption, and submit
it to the network stack.
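
A condensed sketch of the second-pass check in the RX NAPI handler
(simplified from the otx2_rcv_pkt_handler() change below; error
handling omitted, and BIT(11) corresponds to the 0x800 channel mask
used in the code):

	if (parse->chan & BIT(11)) {
		/* CPT metapacket: switch to the WQE of the complete
		 * decrypted packet recovered from CPT_PARSE_HDR_S.
		 */
		orig_pkt_wqe = cn10k_ipsec_process_cpt_metapkt(pfvf, skb,
							       sg->seg_addr);
		sg = &orig_pkt_wqe->sg;
	}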

Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
---
Changes in V4:
- Updated cpt_parse_hdr_s to 4 u64 words
- Switched to using FIELD_GET macros for extracting fields within
  cpt_parse_hdr_s
- With above changes, all the sparse warnings are now resolved

Changes in V3:
- Updated cpt_parse_hdr_s structure to use __be64 type

Changes in V2:
- Removed unnecessary casts
- Don't convert complete cpt_parse_hdr from BE to LE and just
  convert required fields
- Fixed logic to avoid repeated calculation for start and end in sg

V1 Link: https://lore.kernel.org/netdev/20250502132005.611698-15-tanmay@marvell.com/
V2 Link: https://lore.kernel.org/netdev/20250618113020.130888-13-tanmay@marvell.com/
V3 Link: https://lore.kernel.org/netdev/20250711121317.340326-13-tanmay@marvell.com/ 

 .../marvell/octeontx2/nic/cn10k_ipsec.c       | 50 +++++++++++++++++++
 .../marvell/octeontx2/nic/cn10k_ipsec.h       | 23 +++++++++
 .../marvell/octeontx2/nic/otx2_struct.h       | 16 ++++++
 .../marvell/octeontx2/nic/otx2_txrx.c         | 27 +++++++++-
 4 files changed, 115 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
index 550a8da04f1f..81610774e7b6 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
@@ -346,6 +346,56 @@ static int cn10k_outb_cpt_init(struct net_device *netdev)
 	return ret;
 }
 
+struct nix_wqe_rx_s *cn10k_ipsec_process_cpt_metapkt(struct otx2_nic *pfvf,
+						     struct sk_buff *skb,
+						     dma_addr_t seg_addr)
+{
+	struct nix_wqe_rx_s *wqe = NULL;
+	struct cpt_parse_hdr_s *cptp;
+	struct xfrm_offload *xo;
+	struct xfrm_state *xs;
+	struct sec_path *sp;
+	dma_addr_t wqe_iova;
+	u32 sa_index;
+	u64 *sa_ptr;
+
+	/* CPT_PARSE_HDR_S is present at the beginning of the buffer */
+	cptp = phys_to_virt(otx2_iova_to_phys(pfvf->iommu_domain, seg_addr));
+
+	/* Convert the wqe_ptr from CPT_PARSE_HDR_S to a CPU usable pointer */
+	wqe_iova = FIELD_GET(CPT_PARSE_HDR_W1_WQE_PTR, cptp->w1);
+	wqe = phys_to_virt(otx2_iova_to_phys(pfvf->iommu_domain,
+					     be64_to_cpu((__force __be64)wqe_iova)));
+
+	/* Get the XFRM state pointer stored in SA context */
+	sa_index = FIELD_GET(CPT_PARSE_HDR_W0_COOKIE, cptp->w0);
+	sa_ptr = pfvf->ipsec.inb_sa->base +
+		 (be32_to_cpu((__force __be32)sa_index) * pfvf->ipsec.sa_tbl_entry_sz) + 1024;
+	xs = (struct xfrm_state *)*sa_ptr;
+
+	/* Set XFRM offload status and flags for successful decryption */
+	sp = secpath_set(skb);
+	if (!sp) {
+		netdev_err(pfvf->netdev, "Failed to secpath_set\n");
+		wqe = NULL;
+		goto err_out;
+	}
+
+	rcu_read_lock();
+	xfrm_state_hold(xs);
+	rcu_read_unlock();
+
+	sp->xvec[sp->len++] = xs;
+	sp->olen++;
+
+	xo = xfrm_offload(skb);
+	xo->flags = CRYPTO_DONE;
+	xo->status = CRYPTO_SUCCESS;
+
+err_out:
+	return wqe;
+}
+
 static int cn10k_inb_nix_inline_lf_cfg(struct otx2_nic *pfvf)
 {
 	struct nix_inline_ipsec_lf_cfg *req;
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
index 80bc0e4a9da6..7c1e24e21ea3 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
@@ -8,6 +8,7 @@
 #define CN10K_IPSEC_H
 
 #include <linux/types.h>
+#include "otx2_struct.h"
 
 DECLARE_STATIC_KEY_FALSE(cn10k_ipsec_sa_enabled);
 
@@ -302,6 +303,18 @@ struct cpt_sg_s {
 	u64 rsvd_63_50	: 14;
 };
 
+/* CPT Parse Header Structure for Inbound packets */
+struct cpt_parse_hdr_s {
+	u64 w0;
+	u64 w1;
+	u64 w2;
+	u64 w3;
+};
+
+/* Macros to get specific fields from CPT_PARSE_HDR_S */
+#define CPT_PARSE_HDR_W0_COOKIE		GENMASK_ULL(63, 32)
+#define CPT_PARSE_HDR_W1_WQE_PTR	GENMASK_ULL(63, 0)
+
 /* CPT LF_INPROG Register */
 #define CPT_LF_INPROG_INFLIGHT	GENMASK_ULL(8, 0)
 #define CPT_LF_INPROG_GRB_CNT	GENMASK_ULL(39, 32)
@@ -330,6 +343,9 @@ bool cn10k_ipsec_transmit(struct otx2_nic *pf, struct netdev_queue *txq,
 			  struct otx2_snd_queue *sq, struct sk_buff *skb,
 			  int num_segs, int size);
 void cn10k_ipsec_free_aura_ptrs(struct otx2_nic *pfvf);
+struct nix_wqe_rx_s *cn10k_ipsec_process_cpt_metapkt(struct otx2_nic *pfvf,
+						     struct sk_buff *skb,
+						     dma_addr_t seg_addr);
 #else
 static inline __maybe_unused int cn10k_ipsec_init(struct net_device *netdev)
 {
@@ -366,5 +382,12 @@ void cn10k_ipsec_free_aura_ptrs(struct otx2_nic *pfvf)
 {
 }
 
+static inline __maybe_unused
+struct nix_wqe_rx_s *cn10k_ipsec_process_cpt_metapkt(struct otx2_nic *pfvf,
+						     struct sk_buff *skb,
+						     dma_addr_t seg_addr)
+{
+	return NULL;
+}
 #endif
 #endif // CN10K_IPSEC_H
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_struct.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_struct.h
index 4e5899d8fa2e..506fab414b7e 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_struct.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_struct.h
@@ -175,6 +175,22 @@ struct nix_cqe_tx_s {
 	struct nix_send_comp_s comp;
 };
 
+/* NIX WQE header structure */
+struct nix_wqe_hdr_s {
+	u64 flow_tag              : 32;
+	u64 tt                    : 2;
+	u64 reserved_34_43        : 10;
+	u64 node                  : 2;
+	u64 q                     : 14;
+	u64 wqe_type              : 4;
+};
+
+struct nix_wqe_rx_s {
+	struct nix_wqe_hdr_s	hdr;
+	struct nix_rx_parse_s	parse;
+	struct nix_rx_sg_s	sg;
+};
+
 /* NIX SQE header structure */
 struct nix_sqe_hdr_s {
 	u64 total		: 18; /* W0 */
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
index 625bb5a05344..6cffc60a443c 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
@@ -205,11 +205,16 @@ static bool otx2_skb_add_frag(struct otx2_nic *pfvf, struct sk_buff *skb,
 		}
 	}
 
+	if (parse->chan & 0x800)
+		off = 0;
+
 	page = virt_to_page(va);
 	if (likely(skb_shinfo(skb)->nr_frags < MAX_SKB_FRAGS)) {
 		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,
 				va - page_address(page) + off,
 				len - off, pfvf->rbsize);
+		if (parse->chan & 0x800)
+			return false;
 		return true;
 	}
 
@@ -333,6 +338,8 @@ static void otx2_rcv_pkt_handler(struct otx2_nic *pfvf,
 				 struct nix_cqe_rx_s *cqe, bool *need_xdp_flush)
 {
 	struct nix_rx_parse_s *parse = &cqe->parse;
+	struct nix_wqe_rx_s *orig_pkt_wqe = NULL;
+	u32 desc_sizem1 = parse->desc_sizem1;
 	struct nix_rx_sg_s *sg = &cqe->sg;
 	struct sk_buff *skb = NULL;
 	u64 *word = (u64 *)parse;
@@ -359,8 +366,26 @@ static void otx2_rcv_pkt_handler(struct otx2_nic *pfvf,
 	if (unlikely(!skb))
 		return;
 
+	if (parse->chan & 0x800) {
+		orig_pkt_wqe = cn10k_ipsec_process_cpt_metapkt(pfvf, skb, sg->seg_addr);
+		if (!orig_pkt_wqe) {
+			netdev_err(pfvf->netdev, "Invalid WQE in CPT metapacket\n");
+			napi_free_frags(napi);
+			cq->pool_ptrs++;
+			return;
+		}
+		/* Return metapacket buffer back to pool since it's no longer needed */
+		otx2_free_rcv_seg(pfvf, cqe, cq->cq_idx);
+
+		/* Switch *sg to the orig_pkt_wqe's *sg which has the actual
+		 * complete decrypted packet by CPT.
+		 */
+		sg = &orig_pkt_wqe->sg;
+		desc_sizem1 = orig_pkt_wqe->parse.desc_sizem1;
+	}
+
 	start = (void *)sg;
-	end = start + ((cqe->parse.desc_sizem1 + 1) * 16);
+	end = start + ((desc_sizem1 + 1) * 16);
 	while (start < end) {
 		sg = (struct nix_rx_sg_s *)start;
 		seg_addr = &sg->seg_addr;
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH net-next v4 13/14] octeontx2-pf: ipsec: Manage NPC rules and SPI-to-SA table entries
  2025-08-19  2:14 [PATCH net-next v4 00/14] Enable Inbound IPsec offload on Marvell CN10K SoC Tanmay Jagdale
                   ` (11 preceding siblings ...)
  2025-08-19  2:15 ` [PATCH net-next v4 12/14] octeontx2-pf: ipsec: Process CPT metapackets Tanmay Jagdale
@ 2025-08-19  2:15 ` Tanmay Jagdale
  2025-08-19 16:10   ` kernel test robot
  2025-08-19  2:15 ` [PATCH net-next v4 14/14] octeontx2-pf: ipsec: Add XFRM state and policy hooks for inbound flows Tanmay Jagdale
  13 siblings, 1 reply; 16+ messages in thread
From: Tanmay Jagdale @ 2025-08-19  2:15 UTC (permalink / raw)
  To: davem, leon, horms, sgoutham, bbhushan2
  Cc: linux-crypto, netdev, Tanmay Jagdale

NPC rule for IPsec flows
------------------------
Incoming IPsec packets are first classified for hardware fastpath
processing in the NPC block. Hence, allocate an MCAM entry in NPC
using the MCAM_ALLOC_ENTRY mailbox to add a rule for IPsec flow
classification.

Then, install an NPC rule at this entry for packet classification
based on the ESP header and SPI value, with the match action set to
UCAST_IPSEC. These packets also need to be directed to the dedicated
receive queue, so provide the RQ index as part of the NPC_INSTALL_FLOW
mailbox. Add a function to delete the NPC rule as well.
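
A condensed view of the per-SA rule programmed here (fields as in
cn10k_inb_install_flow() below):

	req->features |= BIT(NPC_IPPROTO_ESP) | BIT(NPC_IPSEC_SPI);
	req->index = pfvf->ipsec.inb_ipsec_rq;	/* dedicated IPsec RQ */
	req->op = NIX_RX_ACTIONOP_UCAST_IPSEC;
	req->packet.spi = inb_ctx_info->spi;
	req->mask.spi = cpu_to_be32(0xffffffff);	/* exact SPI match */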

SPI-to-SA match table
---------------------
NIX RX maintains a common hash table for matching the SPI value in an
ESP packet to the SA index associated with it. This table has 2K entries
with 4 ways. When a packet is received with the action UCAST_IPSEC, NIXRX
uses the SPI from the packet header to perform a lookup in the SPI-to-SA
hash table. This lookup, if successful, returns an SA index that NIXRX
uses to calculate the exact SA context address and program it in the
CPT_INST_S before submitting the packet to CPT for decryption.

Add functions to install and delete an entry from this table via the
NIX_SPI_TO_SA_ADD and NIX_SPI_TO_SA_DELETE mailbox calls respectively.

When the RQs are changed at runtime via ethtool, the RVU PF driver frees
all the resources and goes through reinitialization with the new set of
receive queues. As part of this flow, the UCAST_IPSEC NPC rules that were
installed by the RVU PF/VF driver have to be reconfigured with the new
RQ index.

So, delete the NPC rules when the interface is stopped via otx2_stop().
When otx2_open() is called, re-install the NPC flow and re-initialize the
SPI-to-SA table for every SA context that was previously installed.
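
A condensed view of the add path (as in
cn10k_inb_install_spi_to_sa_match_entry() below); the (hash_index, way)
slot returned by AF is saved so the entry can be deleted later:

	req->sa_index = inb_ctx_info->sa_index;
	req->spi_index = be32_to_cpu(x->id.spi);
	req->valid = 1;
	err = otx2_sync_mbox_msg(&pfvf->mbox);
	inb_ctx_info->hash_index = rsp->hash_index;
	inb_ctx_info->way = rsp->way;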

Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
---
Changes in V4:
- None
 
Changes in V3:
- Updated definitions as reported by kernel test robot.
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202506200609.aTq7YfBa-lkp@intel.com/

Changes in V2:
- Use cpu_to_be32
- Moved code from patch 15/15 in V1 to avoid unused function warnings
  for the following:

V1 Link: https://lore.kernel.org/netdev/20250502132005.611698-14-tanmay@marvell.com/
V2 Link: https://lore.kernel.org/netdev/20250618113020.130888-14-tanmay@marvell.com/
V3 Link: https://lore.kernel.org/netdev/20250711121317.340326-14-tanmay@marvell.com/

 .../marvell/octeontx2/nic/cn10k_ipsec.c       | 252 +++++++++++++++++-
 .../marvell/octeontx2/nic/cn10k_ipsec.h       |   7 +
 .../ethernet/marvell/octeontx2/nic/otx2_pf.c  |   8 +
 3 files changed, 262 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
index 81610774e7b6..60c267128d6b 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
@@ -396,6 +396,205 @@ struct nix_wqe_rx_s *cn10k_ipsec_process_cpt_metapkt(struct otx2_nic *pfvf,
 	return wqe;
 }
 
+static int cn10k_inb_alloc_mcam_entry(struct otx2_nic *pfvf,
+				      struct cn10k_inb_sw_ctx_info *inb_ctx_info)
+{
+	struct otx2_flow_config *flow_cfg = pfvf->flow_cfg;
+	struct npc_mcam_alloc_entry_req *mcam_req;
+	struct npc_mcam_alloc_entry_rsp *mcam_rsp;
+	int err = 0;
+
+	if (!pfvf->flow_cfg || !flow_cfg->flow_ent)
+		return -ENODEV;
+
+	mutex_lock(&pfvf->mbox.lock);
+
+	/* Request an MCAM entry to install UCAST_IPSEC rule */
+	mcam_req = otx2_mbox_alloc_msg_npc_mcam_alloc_entry(&pfvf->mbox);
+	if (!mcam_req) {
+		err = -ENOMEM;
+		goto out;
+	}
+
+	mcam_req->contig = false;
+	mcam_req->count = 1;
+	mcam_req->ref_entry = flow_cfg->flow_ent[0];
+	mcam_req->priority = NPC_MCAM_HIGHER_PRIO;
+
+	if (otx2_sync_mbox_msg(&pfvf->mbox)) {
+		err = -ENODEV;
+		goto out;
+	}
+
+	mcam_rsp = (struct npc_mcam_alloc_entry_rsp *)otx2_mbox_get_rsp(&pfvf->mbox.mbox,
+									0, &mcam_req->hdr);
+
+	/* Store NPC MCAM entry for bookkeeping */
+	inb_ctx_info->npc_mcam_entry = mcam_rsp->entry_list[0];
+
+out:
+	mutex_unlock(&pfvf->mbox.lock);
+	return err;
+}
+
+static int cn10k_inb_install_flow(struct otx2_nic *pfvf,
+				  struct cn10k_inb_sw_ctx_info *inb_ctx_info)
+{
+	struct npc_install_flow_req *req;
+	int err;
+
+	/* Allocate an MCAM entry if not previously allocated */
+	if (!inb_ctx_info->npc_mcam_entry) {
+		err = cn10k_inb_alloc_mcam_entry(pfvf, inb_ctx_info);
+		if (err) {
+			netdev_err(pfvf->netdev,
+				   "Failed to allocate MCAM entry for Inbound IPsec flow\n");
+			goto out;
+		}
+	}
+
+	mutex_lock(&pfvf->mbox.lock);
+
+	req = otx2_mbox_alloc_msg_npc_install_flow(&pfvf->mbox);
+	if (!req) {
+		err = -ENOMEM;
+		goto out;
+	}
+
+	req->entry = inb_ctx_info->npc_mcam_entry;
+	req->features |= BIT(NPC_IPPROTO_ESP) | BIT(NPC_IPSEC_SPI);
+	req->intf = NIX_INTF_RX;
+	req->index = pfvf->ipsec.inb_ipsec_rq;
+	req->match_id = 0xfeed;
+	req->channel = pfvf->hw.rx_chan_base;
+	req->op = NIX_RX_ACTIONOP_UCAST_IPSEC;
+	req->set_cntr = 1;
+	req->packet.spi = inb_ctx_info->spi;
+	req->mask.spi = cpu_to_be32(0xffffffff);
+
+	/* Send message to AF */
+	err = otx2_sync_mbox_msg(&pfvf->mbox);
+out:
+	mutex_unlock(&pfvf->mbox.lock);
+	return err;
+}
+
+static int cn10k_inb_delete_flow(struct otx2_nic *pfvf,
+				 struct cn10k_inb_sw_ctx_info *inb_ctx_info)
+{
+	struct npc_delete_flow_req *req;
+	int err = 0;
+
+	mutex_lock(&pfvf->mbox.lock);
+
+	req = otx2_mbox_alloc_msg_npc_delete_flow(&pfvf->mbox);
+	if (!req) {
+		err = -ENOMEM;
+		goto out;
+	}
+
+	req->entry = inb_ctx_info->npc_mcam_entry;
+
+	/* Send message to AF */
+	err = otx2_sync_mbox_msg(&pfvf->mbox);
+out:
+	mutex_unlock(&pfvf->mbox.lock);
+	return err;
+}
+
+static int cn10k_inb_ena_dis_flow(struct otx2_nic *pfvf,
+				  struct cn10k_inb_sw_ctx_info *inb_ctx_info,
+				  bool disable)
+{
+	struct npc_mcam_ena_dis_entry_req *req;
+	int err = 0;
+
+	mutex_lock(&pfvf->mbox.lock);
+
+	if (disable)
+		req = otx2_mbox_alloc_msg_npc_mcam_dis_entry(&pfvf->mbox);
+	else
+		req = otx2_mbox_alloc_msg_npc_mcam_ena_entry(&pfvf->mbox);
+	if (!req) {
+		err = -ENOMEM;
+		goto out;
+	}
+
+	req->entry = inb_ctx_info->npc_mcam_entry;
+
+	err = otx2_sync_mbox_msg(&pfvf->mbox);
+out:
+	mutex_unlock(&pfvf->mbox.lock);
+	return err;
+}
+
+void cn10k_ipsec_inb_disable_flows(struct otx2_nic *pfvf)
+{
+	struct cn10k_inb_sw_ctx_info *inb_ctx_info;
+
+	list_for_each_entry(inb_ctx_info, &pfvf->ipsec.inb_sw_ctx_list, list) {
+		if (cn10k_inb_ena_dis_flow(pfvf, inb_ctx_info, true)) {
+			netdev_err(pfvf->netdev,
+				   "Failed to disable UCAST_IPSEC entry %d\n",
+				   inb_ctx_info->npc_mcam_entry);
+			continue;
+		}
+		inb_ctx_info->delete_npc_and_match_entry = false;
+	}
+}
+
+static int cn10k_inb_install_spi_to_sa_match_entry(struct otx2_nic *pfvf,
+						   struct xfrm_state *x,
+						   struct cn10k_inb_sw_ctx_info *inb_ctx_info)
+{
+	struct nix_spi_to_sa_add_req *req;
+	struct nix_spi_to_sa_add_rsp *rsp;
+	int err;
+
+	mutex_lock(&pfvf->mbox.lock);
+	req = otx2_mbox_alloc_msg_nix_spi_to_sa_add(&pfvf->mbox);
+	if (!req) {
+		mutex_unlock(&pfvf->mbox.lock);
+		return -ENOMEM;
+	}
+
+	req->sa_index = inb_ctx_info->sa_index;
+	req->spi_index = be32_to_cpu(x->id.spi);
+	req->match_id = 0xfeed;
+	req->valid = 1;
+
+	/* Send message to AF */
+	err = otx2_sync_mbox_msg(&pfvf->mbox);
+
+	rsp = (struct nix_spi_to_sa_add_rsp *)otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr);
+	inb_ctx_info->hash_index = rsp->hash_index;
+	inb_ctx_info->way = rsp->way;
+
+	mutex_unlock(&pfvf->mbox.lock);
+	return err;
+}
+
+static int cn10k_inb_delete_spi_to_sa_match_entry(struct otx2_nic *pfvf,
+						  struct cn10k_inb_sw_ctx_info *inb_ctx_info)
+{
+	struct nix_spi_to_sa_delete_req *req;
+	int err;
+
+	mutex_lock(&pfvf->mbox.lock);
+	req = otx2_mbox_alloc_msg_nix_spi_to_sa_delete(&pfvf->mbox);
+	if (!req) {
+		mutex_unlock(&pfvf->mbox.lock);
+		return -ENOMEM;
+	}
+
+	req->hash_index = inb_ctx_info->hash_index;
+	req->way = inb_ctx_info->way;
+
+	err = otx2_sync_mbox_msg(&pfvf->mbox);
+	mutex_unlock(&pfvf->mbox.lock);
+	return err;
+}
+
 static int cn10k_inb_nix_inline_lf_cfg(struct otx2_nic *pfvf)
 {
 	struct nix_inline_ipsec_lf_cfg *req;
@@ -719,6 +918,7 @@ static irqreturn_t cn10k_ipsec_npa_inb_ipsecq_intr_handler(int irq, void *data)
 static int cn10k_inb_cpt_init(struct net_device *netdev)
 {
 	struct otx2_nic *pfvf = netdev_priv(netdev);
+	struct cn10k_inb_sw_ctx_info *inb_ctx_info;
 	int ret = 0, vec;
 	char *irq_name;
 	u64 val;
@@ -778,6 +978,18 @@ static int cn10k_inb_cpt_init(struct net_device *netdev)
 	else
 		ret = 0;
 
+	/* If the driver has any offloaded inbound SA context(s), re-install the
+	 * associated SPI-to-SA match and NPC rules. This is generally executed
+	 * when the RQs are changed at runtime.
+	 */
+	list_for_each_entry(inb_ctx_info, &pfvf->ipsec.inb_sw_ctx_list, list) {
+		cn10k_inb_ena_dis_flow(pfvf, inb_ctx_info, false);
+		cn10k_inb_install_flow(pfvf, inb_ctx_info);
+		cn10k_inb_install_spi_to_sa_match_entry(pfvf,
+							inb_ctx_info->x_state,
+							inb_ctx_info);
+	}
+
 out:
 	return ret;
 }
@@ -1190,12 +1402,42 @@ static void cn10k_ipsec_sa_wq_handler(struct work_struct *work)
 	struct cn10k_ipsec *ipsec = container_of(work, struct cn10k_ipsec,
 						 sa_work);
 	struct otx2_nic *pf = container_of(ipsec, struct otx2_nic, ipsec);
+	struct cn10k_inb_sw_ctx_info *inb_ctx_info, *tmp;
+	int err;
 
-	/* Disable static branch when no more SA enabled */
-	static_branch_disable(&cn10k_ipsec_sa_enabled);
-	rtnl_lock();
-	netdev_update_features(pf->netdev);
-	rtnl_unlock();
+	list_for_each_entry_safe(inb_ctx_info, tmp, &pf->ipsec.inb_sw_ctx_list,
+				 list) {
+		if (!inb_ctx_info->delete_npc_and_match_entry)
+			continue;
+
+		/* Delete all the NPC rules associated with this entry */
+		err = cn10k_inb_delete_flow(pf, inb_ctx_info);
+		if (err)
+			netdev_err(pf->netdev,
+				   "Failed to free UCAST_IPSEC entry %d\n",
+				   inb_ctx_info->npc_mcam_entry);
+
+		/* Remove SPI_TO_SA exact match entry */
+		err = cn10k_inb_delete_spi_to_sa_match_entry(pf, inb_ctx_info);
+		if (err)
+			netdev_err(pf->netdev,
+				   "Failed to delete spi_to_sa_match_entry\n");
+
+		inb_ctx_info->delete_npc_and_match_entry = false;
+
+		/* Finally clear the entry from the SA Table and free inb_ctx_info */
+		clear_bit(inb_ctx_info->sa_index, pf->ipsec.inb_sa_table);
+		list_del(&inb_ctx_info->list);
+		devm_kfree(pf->dev, inb_ctx_info);
+	}
+
+	/* Disable static branch when no SAs remain enabled */
+	if (list_empty(&pf->ipsec.inb_sw_ctx_list) && !pf->ipsec.outb_sa_count) {
+		static_branch_disable(&cn10k_ipsec_sa_enabled);
+		rtnl_lock();
+		netdev_update_features(pf->netdev);
+		rtnl_unlock();
+	}
 }
 
 void cn10k_ipsec_free_aura_ptrs(struct otx2_nic *pfvf)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
index 7c1e24e21ea3..154247958d8c 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
@@ -346,6 +346,7 @@ void cn10k_ipsec_free_aura_ptrs(struct otx2_nic *pfvf);
 struct nix_wqe_rx_s *cn10k_ipsec_process_cpt_metapkt(struct otx2_nic *pfvf,
 						     struct sk_buff *skb,
 						     dma_addr_t seg_addr);
+void cn10k_ipsec_inb_disable_flows(struct otx2_nic *pfvf);
 #else
 static inline __maybe_unused int cn10k_ipsec_init(struct net_device *netdev)
 {
@@ -389,5 +390,11 @@ struct nix_wqe_rx_s *cn10k_ipsec_process_cpt_metapkt(struct otx2_nic *pfvf,
 {
 	return NULL;
 }
+
+static inline __maybe_unused
+void cn10k_ipsec_inb_delete_flows(struct otx2_nic *pfvf)
+{
+}
+
 #endif
 #endif // CN10K_IPSEC_H
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
index d1e77ea7b290..3d7ef46e9a8a 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
@@ -1804,6 +1804,10 @@ void otx2_free_hw_resources(struct otx2_nic *pf)
 	if (!otx2_rep_dev(pf->pdev))
 		cn10k_free_all_ipolicers(pf);
 
+	/* Delete Inbound IPSec flows if any SA's are installed */
+	if (!list_empty(&pf->ipsec.inb_sw_ctx_list))
+		cn10k_ipsec_inb_disable_flows(pf);
+
 	mutex_lock(&mbox->lock);
 	/* Reset NIX LF */
 	free_req = otx2_mbox_alloc_msg_nix_lf_free(mbox);
@@ -2135,6 +2139,10 @@ int otx2_open(struct net_device *netdev)
 
 	otx2_do_set_rx_mode(pf);
 
+	/* Re-initialize IPsec flows if any were previously installed */
+	if (!list_empty(&pf->ipsec.inb_sw_ctx_list))
+		cn10k_ipsec_ethtool_init(netdev, true);
+
 	return 0;
 
 err_disable_rxtx:
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 16+ messages in thread
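
A note on the deferred cleanup added above: cn10k_ipsec_sa_wq_handler()
exists because the NPC and SPI-to-SA teardown goes through mailbox calls
that can sleep, while the XFRM callbacks that request the teardown may
run in atomic context. A minimal, self-contained sketch of that
flag-and-defer pattern (names and types here are illustrative, not the
driver's; list locking elided for brevity):

#include <linux/list.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct demo_ctx {
	struct list_head list;
	bool pending_delete;		/* set from atomic context */
};

struct demo_ipsec {
	struct list_head ctx_list;
	struct workqueue_struct *wq;
	struct work_struct work;
};

/* Atomic-context side: no sleeping allowed, so only mark the entry
 * and kick the worker.
 */
static void demo_mark_for_delete(struct demo_ipsec *ip, struct demo_ctx *c)
{
	c->pending_delete = true;
	queue_work(ip->wq, &ip->work);
}

/* Process-context side: safe to sleep, so the real teardown (the
 * driver's mailbox calls) belongs here.
 */
static void demo_work_handler(struct work_struct *w)
{
	struct demo_ipsec *ip = container_of(w, struct demo_ipsec, work);
	struct demo_ctx *c, *tmp;

	list_for_each_entry_safe(c, tmp, &ip->ctx_list, list) {
		if (!c->pending_delete)
			continue;
		list_del(&c->list);
		kfree(c);
	}
}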

* [PATCH net-next v4 14/14] octeontx2-pf: ipsec: Add XFRM state and policy hooks for inbound flows
  2025-08-19  2:14 [PATCH net-next v4 00/14] Enable Inbound IPsec offload on Marvell CN10K SoC Tanmay Jagdale
                   ` (12 preceding siblings ...)
  2025-08-19  2:15 ` [PATCH net-next v4 13/14] octeontx2-pf: ipsec: Manage NPC rules and SPI-to-SA table entries Tanmay Jagdale
@ 2025-08-19  2:15 ` Tanmay Jagdale
  13 siblings, 0 replies; 16+ messages in thread
From: Tanmay Jagdale @ 2025-08-19  2:15 UTC (permalink / raw)
  To: davem, leon, horms, sgoutham, bbhushan2
  Cc: linux-crypto, netdev, Tanmay Jagdale

Add XFRM state hook for inbound flows and configure the following:
  - Install an NPC rule to classify the 1st pass IPsec packets and
    direct them to the dedicated RQ
  - Allocate a free entry from the SA table and populate it with the
    SA context details based on xfrm state data.
  - Create a mapping of the SPI value to the SA table index. This is
    used by NIXRX to calculate the exact SA context pointer address
    based on the SPI in the packet.
  - Prepare the CPT SA context to decrypt the buffer in place and then
    write it to the CPT hardware via an LMT operation.
  - When the XFRM state is deleted, clear this SA in the CPT hardware.

Also add XFRM policy hooks to allow successful offload of inbound
PACKET_MODE policies.
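
In outline, the add-state path wires these steps together as follows
(condensed; error handling trimmed, and every call named here appears
in the diff below):

    ctx = look up inb_sw_ctx_list by x->props.reqid   /* policy added first? */
    if (!ctx)
            ctx = devm_kzalloc(...)                   /* state added first */
    sa_index = cn10k_inb_alloc_sa(pf, x)              /* claim an SA table slot */
    cn10k_inb_install_spi_to_sa_match_entry(pf, x, ctx)
    cn10k_xfrm_inb_prepare_sa(pf, x, ctx)             /* fill the SA context */
    cn10k_inb_write_sa(pf, x, ctx)                    /* push to CPT via LMT */
    if (ctx came from an earlier policy add)
            cn10k_inb_ena_dis_flow(pf, ctx, false)    /* enable the NPC rule */
    else
            list_add_tail(&ctx->list, &pf->ipsec.inb_sw_ctx_list)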

Signed-off-by: Tanmay Jagdale <tanmay@marvell.com>
---
Changes in V4:
- Fixed warnings reported by sparse tool

Changes in V3:
- Use dma_wmb() instead of arm64 specific dmb(sy)

Changes in V2
- Used reqid to track NPC rule between XFRM state and policy hooks

V1 Link: https://lore.kernel.org/netdev/20250502132005.611698-16-tanmay@marvell.com/
V2 Link: https://lore.kernel.org/netdev/20250618113020.130888-15-tanmay@marvell.com/
V3 Link: https://lore.kernel.org/netdev/20250711121317.340326-15-tanmay@marvell.com/

 .../marvell/octeontx2/nic/cn10k_ipsec.c       | 401 +++++++++++++++++-
 .../marvell/octeontx2/nic/cn10k_ipsec.h       |   1 +
 2 files changed, 379 insertions(+), 23 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
index 60c267128d6b..6036be82fd38 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c
@@ -1026,6 +1026,19 @@ static int cn10k_outb_cpt_clean(struct otx2_nic *pf)
 	return ret;
 }
 
+static u32 cn10k_inb_alloc_sa(struct otx2_nic *pf, struct xfrm_state *x)
+{
+	u32 sa_index = 0;
+
+	sa_index = find_first_zero_bit(pf->ipsec.inb_sa_table, CN10K_IPSEC_INB_MAX_SA);
+	if (sa_index >= CN10K_IPSEC_INB_MAX_SA)
+		return sa_index;
+
+	set_bit(sa_index, pf->ipsec.inb_sa_table);
+
+	return sa_index;
+}
+
 static void cn10k_cpt_inst_flush(struct otx2_nic *pf, struct cpt_inst_s *inst,
 				 u64 size)
 {
@@ -1140,6 +1153,137 @@ static int cn10k_outb_write_sa(struct otx2_nic *pf, struct qmem *sa_info)
 	return ret;
 }
 
+static int cn10k_inb_write_sa(struct otx2_nic *pf,
+			      struct xfrm_state *x,
+			      struct cn10k_inb_sw_ctx_info *inb_ctx_info)
+{
+	dma_addr_t res_iova, dptr_iova, sa_iova;
+	struct cn10k_rx_sa_s *sa_dptr, *sa_cptr;
+	struct cpt_inst_s inst;
+	u32 sa_size, off;
+	struct cpt_res_s *res;
+	u64 reg_val;
+	int ret;
+
+	res = dma_alloc_coherent(pf->dev, sizeof(struct cpt_res_s),
+				 &res_iova, GFP_ATOMIC);
+	if (!res)
+		return -ENOMEM;
+
+	sa_cptr = inb_ctx_info->sa_entry;
+	sa_iova = inb_ctx_info->sa_iova;
+	sa_size = sizeof(struct cn10k_rx_sa_s);
+
+	sa_dptr = dma_alloc_coherent(pf->dev, sa_size, &dptr_iova, GFP_ATOMIC);
+	if (!sa_dptr) {
+		dma_free_coherent(pf->dev, sizeof(struct cpt_res_s), res,
+				  res_iova);
+		return -ENOMEM;
+	}
+
+	for (off = 0; off < (sa_size / 8); off++)
+		*((u64 *)sa_dptr + off) = (__force u64)cpu_to_be64(*((u64 *)sa_cptr + off));
+
+	memset(&inst, 0, sizeof(struct cpt_inst_s));
+
+	res->compcode = 0;
+	inst.res_addr = res_iova;
+	inst.dptr = (u64)dptr_iova;
+	inst.param2 = sa_size >> 3;
+	inst.dlen = sa_size;
+	inst.opcode_major = CN10K_IPSEC_MAJOR_OP_WRITE_SA;
+	inst.opcode_minor = CN10K_IPSEC_MINOR_OP_WRITE_SA;
+	inst.cptr = sa_iova;
+	inst.ctx_val = 1;
+	inst.egrp = CN10K_DEF_CPT_IPSEC_EGRP;
+
+	/* Re-use Outbound CPT LF to install Ingress SAs as well because
+	 * the driver does not own the ingress CPT LF.
+	 */
+	pf->ipsec.io_addr = (__force u64)otx2_get_regaddr(pf, CN10K_CPT_LF_NQX(0));
+	cn10k_cpt_inst_flush(pf, &inst, sizeof(struct cpt_inst_s));
+	dma_wmb();
+
+	ret = cn10k_wait_for_cpt_respose(pf, res);
+	if (ret)
+		goto out;
+
+	/* Trigger CTX flush to write dirty data back to DRAM */
+	reg_val = FIELD_PREP(GENMASK_ULL(45, 0), sa_iova >> 7);
+	otx2_write64(pf, CN10K_CPT_LF_CTX_FLUSH, reg_val);
+
+out:
+	dma_free_coherent(pf->dev, sa_size, sa_dptr, dptr_iova);
+	dma_free_coherent(pf->dev, sizeof(struct cpt_res_s), res, res_iova);
+	return ret;
+}
+
+static void cn10k_xfrm_inb_prepare_sa(struct otx2_nic *pf, struct xfrm_state *x,
+				      struct cn10k_inb_sw_ctx_info *inb_ctx_info)
+{
+	struct cn10k_rx_sa_s *sa_entry = inb_ctx_info->sa_entry;
+	int key_len = (x->aead->alg_key_len + 7) / 8;
+	u8 *key = x->aead->alg_key;
+	u32 sa_size = sizeof(struct cn10k_rx_sa_s);
+	u64 *tmp_key;
+	u32 *tmp_salt;
+	int idx;
+
+	memset(sa_entry, 0, sizeof(struct cn10k_rx_sa_s));
+
+	/* Disable ESN for now */
+	sa_entry->esn_en = 0;
+
+	/* HW context offset is word-31 */
+	sa_entry->hw_ctx_off = 31;
+	sa_entry->pkind = NPC_RX_CPT_HDR_PKIND;
+	sa_entry->eth_ovrwr = 1;
+	sa_entry->pkt_output = 1;
+	sa_entry->pkt_format = 1;
+	sa_entry->orig_pkt_free = 0;
+	/* context push size is up to word 31 */
+	sa_entry->ctx_push_size = 31 + 1;
+	/* context size, 128 Byte aligned up */
+	sa_entry->ctx_size = (sa_size / OTX2_ALIGN) & 0xF;
+
+	sa_entry->cookie = inb_ctx_info->sa_index;
+
+	/* 1 word prepended to context header size */
+	sa_entry->ctx_hdr_size = 1;
+	/* Mark SA entry valid */
+	sa_entry->aop_valid = 1;
+
+	sa_entry->sa_dir = 0;			/* Inbound */
+	sa_entry->ipsec_protocol = 1;		/* ESP */
+	/* Default to Transport Mode */
+	if (x->props.mode == XFRM_MODE_TUNNEL)
+		sa_entry->ipsec_mode = 1;	/* Tunnel Mode */
+
+	sa_entry->et_ovrwr_ddr_en = 1;
+	sa_entry->enc_type = 5;			/* AES-GCM only */
+	sa_entry->aes_key_len = 1;		/* AES key length 128 */
+	sa_entry->l2_l3_hdr_on_error = 1;
+	sa_entry->spi = (__force u32)be32_to_cpu(x->id.spi);
+
+	/* Last 4 bytes are salt */
+	key_len -= 4;
+	memcpy(sa_entry->cipher_key, key, key_len);
+	tmp_key = (u64 *)sa_entry->cipher_key;
+
+	for (idx = 0; idx < key_len / 8; idx++)
+		tmp_key[idx] = (__force u64)cpu_to_be64(tmp_key[idx]);
+
+	memcpy(&sa_entry->iv_gcm_salt, key + key_len, 4);
+	tmp_salt = (u32 *)&sa_entry->iv_gcm_salt;
+	*tmp_salt = (__force u32)cpu_to_be32(*tmp_salt);
+
+	/* Write SA context data to memory before enabling */
+	wmb();
+
+	/* Enable SA */
+	sa_entry->sa_valid = 1;
+}
+
 static int cn10k_ipsec_get_hw_ctx_offset(void)
 {
 	/* Offset on Hardware-context offset in word */
@@ -1247,11 +1391,6 @@ static int cn10k_ipsec_validate_state(struct xfrm_state *x,
 				   "Only IPv4/v6 xfrm states may be offloaded");
 		return -EINVAL;
 	}
-	if (x->xso.type != XFRM_DEV_OFFLOAD_CRYPTO) {
-		NL_SET_ERR_MSG_MOD(extack,
-				   "Cannot offload other than crypto-mode");
-		return -EINVAL;
-	}
 	if (x->props.mode != XFRM_MODE_TRANSPORT &&
 	    x->props.mode != XFRM_MODE_TUNNEL) {
 		NL_SET_ERR_MSG_MOD(extack,
@@ -1263,11 +1402,6 @@ static int cn10k_ipsec_validate_state(struct xfrm_state *x,
 				   "Only ESP xfrm state may be offloaded");
 		return -EINVAL;
 	}
-	if (x->encap) {
-		NL_SET_ERR_MSG_MOD(extack,
-				   "Encapsulated xfrm state may not be offloaded");
-		return -EINVAL;
-	}
 	if (!x->aead) {
 		NL_SET_ERR_MSG_MOD(extack,
 				   "Cannot offload xfrm states without aead");
@@ -1304,11 +1438,95 @@ static int cn10k_ipsec_validate_state(struct xfrm_state *x,
 	return 0;
 }
 
-static int cn10k_ipsec_inb_add_state(struct xfrm_state *x,
+static int cn10k_ipsec_inb_add_state(struct net_device *dev,
+				     struct xfrm_state *x,
 				     struct netlink_ext_ack *extack)
 {
-	NL_SET_ERR_MSG_MOD(extack, "xfrm inbound offload not supported");
-	return -EOPNOTSUPP;
+	struct cn10k_inb_sw_ctx_info *inb_ctx_info = NULL, *inb_ctx;
+	bool enable_rule = false;
+	struct otx2_nic *pf;
+	u64 *sa_offset_ptr;
+	u32 sa_index = 0;
+	int err = 0;
+
+	pf = netdev_priv(dev);
+
+	/* If XFRM policy was added before state, then the inb_ctx_info instance
+	 * would be allocated there.
+	 */
+	list_for_each_entry(inb_ctx, &pf->ipsec.inb_sw_ctx_list, list) {
+		if (inb_ctx->reqid == x->props.reqid) {
+			inb_ctx_info = inb_ctx;
+			enable_rule = true;
+			break;
+		}
+	}
+
+	if (!inb_ctx_info) {
+		/* Allocate a structure to track SA related info in driver */
+		inb_ctx_info = devm_kzalloc(pf->dev, sizeof(*inb_ctx_info), GFP_KERNEL);
+		if (!inb_ctx_info)
+			return -ENOMEM;
+
+		/* Stash pointer in the xfrm offload handle */
+		x->xso.offload_handle = (unsigned long)inb_ctx_info;
+	}
+
+	sa_index = cn10k_inb_alloc_sa(pf, x);
+	if (sa_index >= CN10K_IPSEC_INB_MAX_SA) {
+		netdev_err(dev, "Failed to find free entry in SA Table\n");
+		err = -ENOMEM;
+		goto err_out;
+	}
+
+	/* Fill in information for bookkeeping */
+	inb_ctx_info->sa_index = sa_index;
+	inb_ctx_info->spi = x->id.spi;
+	inb_ctx_info->reqid = x->props.reqid;
+	inb_ctx_info->sa_entry = pf->ipsec.inb_sa->base +
+				 (sa_index * pf->ipsec.sa_tbl_entry_sz);
+	inb_ctx_info->sa_iova = pf->ipsec.inb_sa->iova +
+				(sa_index * pf->ipsec.sa_tbl_entry_sz);
+	inb_ctx_info->x_state = x;
+
+	/* Store XFRM state pointer in SA context at an offset of 1KB.
+	 * It will be later used in the rcv_pkt_handler to associate
+	 * an skb with XFRM state.
+	 */
+	sa_offset_ptr = pf->ipsec.inb_sa->base +
+		 (sa_index * pf->ipsec.sa_tbl_entry_sz) + 1024;
+	*sa_offset_ptr = (u64)x;
+
+	err = cn10k_inb_install_spi_to_sa_match_entry(pf, x, inb_ctx_info);
+	if (err) {
+		netdev_err(dev, "Failed to install Inbound IPSec exact match entry\n");
+		goto err_out;
+	}
+
+	/* Fill the Inbound SA context structure */
+	cn10k_xfrm_inb_prepare_sa(pf, x, inb_ctx_info);
+
+	err = cn10k_inb_write_sa(pf, x, inb_ctx_info);
+	if (err)
+		netdev_err(dev, "Error writing inbound SA\n");
+
+	/* Enable NPC rule if policy was already installed */
+	if (enable_rule) {
+		err = cn10k_inb_ena_dis_flow(pf, inb_ctx_info, false);
+		if (err)
+			netdev_err(dev, "Failed to enable rule\n");
+	} else {
+		/* All set, add ctx_info to the list */
+		list_add_tail(&inb_ctx_info->list, &pf->ipsec.inb_sw_ctx_list);
+	}
+
+	cn10k_cpt_device_set_available(pf);
+	return err;
+
+err_out:
+	x->xso.offload_handle = 0;
+	devm_kfree(pf->dev, inb_ctx_info);
+	return err;
 }
 
 static int cn10k_ipsec_outb_add_state(struct net_device *dev,
@@ -1320,10 +1538,6 @@ static int cn10k_ipsec_outb_add_state(struct net_device *dev,
 	struct otx2_nic *pf;
 	int err;
 
-	err = cn10k_ipsec_validate_state(x, extack);
-	if (err)
-		return err;
-
 	pf = netdev_priv(dev);
 
 	err = qmem_alloc(pf->dev, &sa_info, pf->ipsec.sa_size, OTX2_ALIGN);
@@ -1352,10 +1566,52 @@ static int cn10k_ipsec_add_state(struct net_device *dev,
 				 struct xfrm_state *x,
 				 struct netlink_ext_ack *extack)
 {
+	int err;
+
+	err = cn10k_ipsec_validate_state(x, extack);
+	if (err)
+		return err;
+
 	if (x->xso.dir == XFRM_DEV_OFFLOAD_IN)
-		return cn10k_ipsec_inb_add_state(x, extack);
+		return cn10k_ipsec_inb_add_state(dev, x, extack);
 	else
 		return cn10k_ipsec_outb_add_state(dev, x, extack);
+
+	return err;
+}
+
+static void cn10k_ipsec_inb_del_state(struct net_device *dev,
+				      struct otx2_nic *pf, struct xfrm_state *x)
+{
+	struct cn10k_inb_sw_ctx_info *inb_ctx_info;
+	struct cn10k_rx_sa_s *sa_entry;
+	int err = 0;
+
+	/* 1. Find SPI to SA entry */
+	inb_ctx_info = (struct cn10k_inb_sw_ctx_info *)x->xso.offload_handle;
+
+	if (inb_ctx_info->spi != x->id.spi) {
+		netdev_err(dev, "SPI Mismatch (ctx) 0x%x != 0x%x (xfrm)\n",
+			   inb_ctx_info->spi, be32_to_cpu(x->id.spi));
+		return;
+	}
+
+	/* 2. Delete SA in CPT HW */
+	sa_entry = inb_ctx_info->sa_entry;
+	memset(sa_entry, 0, sizeof(struct cn10k_rx_sa_s));
+
+	sa_entry->ctx_push_size = 31 + 1;
+	sa_entry->ctx_size = (sizeof(struct cn10k_rx_sa_s) / OTX2_ALIGN) & 0xF;
+	sa_entry->aop_valid = 1;
+
+	if (cn10k_cpt_device_set_inuse(pf)) {
+		err = cn10k_inb_write_sa(pf, x, inb_ctx_info);
+		if (err)
+			netdev_err(dev, "Error (%d) deleting INB SA\n", err);
+		cn10k_cpt_device_set_available(pf);
+	}
+
+	x->xso.offload_handle = 0;
 }
 
 static void cn10k_ipsec_del_state(struct net_device *dev, struct xfrm_state *x)
@@ -1365,11 +1621,11 @@ static void cn10k_ipsec_del_state(struct net_device *dev, struct xfrm_state *x)
 	struct otx2_nic *pf;
 	int err;
 
-	if (x->xso.dir == XFRM_DEV_OFFLOAD_IN)
-		return;
-
 	pf = netdev_priv(dev);
 
+	if (x->xso.dir == XFRM_DEV_OFFLOAD_IN)
+		return cn10k_ipsec_inb_del_state(dev, pf, x);
+
 	sa_info = (struct qmem *)x->xso.offload_handle;
 	sa_entry = (struct cn10k_tx_sa_s *)sa_info->base;
 	memset(sa_entry, 0, sizeof(struct cn10k_tx_sa_s));
@@ -1388,13 +1644,112 @@ static void cn10k_ipsec_del_state(struct net_device *dev, struct xfrm_state *x)
 	/* If no more SA's then update netdev feature for potential change
 	 * in NETIF_F_HW_ESP.
 	 */
-	if (!--pf->ipsec.outb_sa_count)
-		queue_work(pf->ipsec.sa_workq, &pf->ipsec.sa_work);
+	pf->ipsec.outb_sa_count--;
+	queue_work(pf->ipsec.sa_workq, &pf->ipsec.sa_work);
+}
+
+static int cn10k_ipsec_policy_add(struct xfrm_policy *x,
+				  struct netlink_ext_ack *extack)
+{
+	struct cn10k_inb_sw_ctx_info *inb_ctx_info = NULL, *inb_ctx;
+	struct net_device *netdev = x->xdo.dev;
+	struct otx2_nic *pf;
+	int ret = 0;
+	bool disable_rule = true;
+
+	if (x->xdo.dir != XFRM_DEV_OFFLOAD_IN) {
+		netdev_err(netdev, "ERR: Can only offload Inbound policies\n");
+		return -EINVAL;
+	}
+
+	if (x->xdo.type != XFRM_DEV_OFFLOAD_PACKET) {
+		netdev_err(netdev, "ERR: Only Packet mode supported\n");
+		return -EINVAL;
+	}
+
+	pf = netdev_priv(netdev);
+
+	/* If XFRM state was added before policy, then the inb_ctx_info instance
+	 * would be allocated there.
+	 */
+	list_for_each_entry(inb_ctx, &pf->ipsec.inb_sw_ctx_list, list) {
+		if (inb_ctx->reqid == x->xfrm_vec[0].reqid) {
+			inb_ctx_info = inb_ctx;
+			disable_rule = false;
+			break;
+		}
+	}
+
+	if (!inb_ctx_info) {
+		/* Allocate a structure to track SA related info in driver */
+		inb_ctx_info = devm_kzalloc(pf->dev, sizeof(*inb_ctx_info), GFP_KERNEL);
+		if (!inb_ctx_info)
+			return -ENOMEM;
+
+		inb_ctx_info->reqid = x->xfrm_vec[0].reqid;
+	}
+
+	ret = cn10k_inb_alloc_mcam_entry(pf, inb_ctx_info);
+	if (ret) {
+		netdev_err(netdev, "Failed to allocate MCAM entry for Inbound IPSec flow\n");
+		goto err_out;
+	}
+
+	ret = cn10k_inb_install_flow(pf, inb_ctx_info);
+	if (ret) {
+		netdev_err(netdev, "Failed to install Inbound IPSec flow\n");
+		goto err_out;
+	}
+
+	/* Leave rule in a disabled state until xfrm_state add is completed */
+	if (disable_rule) {
+		ret = cn10k_inb_ena_dis_flow(pf, inb_ctx_info, true);
+		if (ret)
+			netdev_err(netdev, "Failed to disable rule\n");
+
+		/* All set, add ctx_info to the list */
+		list_add_tail(&inb_ctx_info->list, &pf->ipsec.inb_sw_ctx_list);
+	}
+
+	/* Stash pointer in the xfrm offload handle */
+	x->xdo.offload_handle = (unsigned long)inb_ctx_info;
+
+err_out:
+	return ret;
+}
+
+static void cn10k_ipsec_policy_delete(struct xfrm_policy *x)
+{
+	struct cn10k_inb_sw_ctx_info *inb_ctx_info;
+	struct net_device *netdev = x->xdo.dev;
+	struct otx2_nic *pf;
+
+	if (!x->xdo.offload_handle)
+		return;
+
+	pf = netdev_priv(netdev);
+	inb_ctx_info = (struct cn10k_inb_sw_ctx_info *)x->xdo.offload_handle;
+
+	/* Schedule a workqueue to free NPC rule and SPI-to-SA match table
+	 * entry because they are freed via a mailbox call which can sleep
+	 * and the delete policy routine from XFRM stack is called in an
+	 * atomic context.
+	 */
+	inb_ctx_info->delete_npc_and_match_entry = true;
+	queue_work(pf->ipsec.sa_workq, &pf->ipsec.sa_work);
+}
+
+static void cn10k_ipsec_policy_free(struct xfrm_policy *x)
+{
+	return;
 }
 
 static const struct xfrmdev_ops cn10k_ipsec_xfrmdev_ops = {
 	.xdo_dev_state_add	= cn10k_ipsec_add_state,
 	.xdo_dev_state_delete	= cn10k_ipsec_del_state,
+	.xdo_dev_policy_add	= cn10k_ipsec_policy_add,
+	.xdo_dev_policy_delete	= cn10k_ipsec_policy_delete,
+	.xdo_dev_policy_free	= cn10k_ipsec_policy_free,
 };
 
 static void cn10k_ipsec_sa_wq_handler(struct work_struct *work)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
index 154247958d8c..662f6ba5e0a3 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.h
@@ -94,6 +94,7 @@ struct cn10k_inb_sw_ctx_info {
 	u32 npc_mcam_entry;
 	u32 sa_index;
 	__be32 spi;
+	u32 reqid;
 	u16 hash_index;	/* Hash index from SPI_TO_SA match */
 	u8 way;		/* SPI_TO_SA match table way index */
 	bool delete_npc_and_match_entry;
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 16+ messages in thread
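
A side note on the SA-table allocator in this patch:
cn10k_inb_alloc_sa() is a thin wrapper around the kernel's
find_first_zero_bit()/set_bit() pair, with an index of
CN10K_IPSEC_INB_MAX_SA or above doubling as the failure indication.
The same pattern in self-contained userspace C (purely illustrative;
the constants and names are not the driver's):

#include <limits.h>

#define DEMO_MAX_SA	128
#define BITS_PER_WORD	(sizeof(unsigned long) * CHAR_BIT)

static unsigned long demo_sa_table[DEMO_MAX_SA / BITS_PER_WORD];

/* Find the first clear bit, claim it, and return its index;
 * DEMO_MAX_SA means "table full", mirroring the driver's
 * sa_index >= CN10K_IPSEC_INB_MAX_SA check.
 */
static unsigned int demo_alloc_sa(void)
{
	for (unsigned int i = 0; i < DEMO_MAX_SA; i++) {
		unsigned long *word = &demo_sa_table[i / BITS_PER_WORD];
		unsigned long mask = 1UL << (i % BITS_PER_WORD);

		if (!(*word & mask)) {
			*word |= mask;	/* claim the slot */
			return i;
		}
	}
	return DEMO_MAX_SA;
}

/* Release a slot, as clear_bit() does in the teardown path. */
static void demo_free_sa(unsigned int i)
{
	demo_sa_table[i / BITS_PER_WORD] &= ~(1UL << (i % BITS_PER_WORD));
}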

* Re: [PATCH net-next v4 13/14] octeontx2-pf: ipsec: Manage NPC rules and SPI-to-SA table entries
  2025-08-19  2:15 ` [PATCH net-next v4 13/14] octeontx2-pf: ipsec: Manage NPC rules and SPI-to-SA table entries Tanmay Jagdale
@ 2025-08-19 16:10   ` kernel test robot
  0 siblings, 0 replies; 16+ messages in thread
From: kernel test robot @ 2025-08-19 16:10 UTC (permalink / raw)
  To: Tanmay Jagdale, davem, leon, horms, sgoutham, bbhushan2
  Cc: oe-kbuild-all, linux-crypto, netdev, Tanmay Jagdale

Hi Tanmay,

kernel test robot noticed the following build errors:

[auto build test ERROR on net-next/main]

url:    https://github.com/intel-lab-lkp/linux/commits/Tanmay-Jagdale/crypto-octeontx2-Share-engine-group-info-with-AF-driver/20250819-103300
base:   net-next/main
patch link:    https://lore.kernel.org/r/20250819021507.323752-14-tanmay%40marvell.com
patch subject: [PATCH net-next v4 13/14] octeontx2-pf: ipsec: Manage NPC rules and SPI-to-SA table entries
config: sparc64-randconfig-002-20250819 (https://download.01.org/0day-ci/archive/20250820/202508200056.n3gONjJD-lkp@intel.com/config)
compiler: sparc64-linux-gcc (GCC) 8.5.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250820/202508200056.n3gONjJD-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202508200056.n3gONjJD-lkp@intel.com/

All errors (new ones prefixed by >>):

   drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c: In function 'otx2_free_hw_resources':
>> drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c:1809:3: error: implicit declaration of function 'cn10k_ipsec_inb_disable_flows'; did you mean 'cn10k_ipsec_inb_delete_flows'? [-Werror=implicit-function-declaration]
      cn10k_ipsec_inb_disable_flows(pf);
      ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      cn10k_ipsec_inb_delete_flows
   cc1: some warnings being treated as errors


vim +1809 drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c

  1772	
  1773		if (!otx2_rep_dev(pf->pdev))
  1774			otx2_clean_qos_queues(pf);
  1775	
  1776		mutex_lock(&mbox->lock);
  1777		/* Disable backpressure */
  1778		if (!is_otx2_lbkvf(pf->pdev))
  1779			otx2_nix_config_bp(pf, false);
  1780		mutex_unlock(&mbox->lock);
  1781	
  1782		/* Disable RQs */
  1783		otx2_ctx_disable(mbox, NIX_AQ_CTYPE_RQ, false);
  1784	
  1785		/*Dequeue all CQEs */
  1786		for (qidx = 0; qidx < qset->cq_cnt; qidx++) {
  1787			cq = &qset->cq[qidx];
  1788			if (cq->cq_type == CQ_RX)
  1789				otx2_cleanup_rx_cqes(pf, cq, qidx);
  1790			else
  1791				otx2_cleanup_tx_cqes(pf, cq);
  1792		}
  1793		otx2_free_pending_sqe(pf);
  1794	
  1795		otx2_free_sq_res(pf);
  1796	
  1797		/* Free RQ buffer pointers*/
  1798		otx2_free_aura_ptr(pf, AURA_NIX_RQ);
  1799		cn10k_ipsec_free_aura_ptrs(pf);
  1800	
  1801		otx2_free_cq_res(pf);
  1802	
  1803		/* Free all ingress bandwidth profiles allocated */
  1804		if (!otx2_rep_dev(pf->pdev))
  1805			cn10k_free_all_ipolicers(pf);
  1806	
  1807		/* Delete Inbound IPSec flows if any SA's are installed */
  1808		if (!list_empty(&pf->ipsec.inb_sw_ctx_list))
> 1809			cn10k_ipsec_inb_disable_flows(pf);
  1810	
  1811		mutex_lock(&mbox->lock);
  1812		/* Reset NIX LF */
  1813		free_req = otx2_mbox_alloc_msg_nix_lf_free(mbox);
  1814		if (free_req) {
  1815			free_req->flags = NIX_LF_DISABLE_FLOWS;
  1816			if (!(pf->flags & OTX2_FLAG_PF_SHUTDOWN))
  1817				free_req->flags |= NIX_LF_DONT_FREE_TX_VTAG;
  1818			if (otx2_sync_mbox_msg(mbox))
  1819				dev_err(pf->dev, "%s failed to free nixlf\n", __func__);
  1820		}
  1821		mutex_unlock(&mbox->lock);
  1822	
  1823		/* Disable NPA Pool and Aura hw context */
  1824		otx2_ctx_disable(mbox, NPA_AQ_CTYPE_POOL, true);
  1825		otx2_ctx_disable(mbox, NPA_AQ_CTYPE_AURA, true);
  1826		otx2_aura_pool_free(pf);
  1827	
  1828		mutex_lock(&mbox->lock);
  1829		/* Reset NPA LF */
  1830		req = otx2_mbox_alloc_msg_npa_lf_free(mbox);
  1831		if (req) {
  1832			if (otx2_sync_mbox_msg(mbox))
  1833				dev_err(pf->dev, "%s failed to free npalf\n", __func__);
  1834		}
  1835		mutex_unlock(&mbox->lock);
  1836	}
  1837	EXPORT_SYMBOL(otx2_free_hw_resources);
  1838	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

^ permalink raw reply	[flat|nested] 16+ messages in thread
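
The failure above is the stub/caller naming mismatch introduced in
patch 13: cn10k_ipsec.h adds a fallback stub named
cn10k_ipsec_inb_delete_flows() for builds without IPsec offload, while
otx2_pf.c calls cn10k_ipsec_inb_disable_flows(). A sketch of the likely
one-line rename (illustrative only; no follow-up patch appears in this
thread):

 static inline __maybe_unused
-void cn10k_ipsec_inb_delete_flows(struct otx2_nic *pfvf)
+void cn10k_ipsec_inb_disable_flows(struct otx2_nic *pfvf)
 {
 }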

end of thread, other threads:[~2025-08-19 16:11 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-08-19  2:14 [PATCH net-next v4 00/14] Enable Inbound IPsec offload on Marvell CN10K SoC Tanmay Jagdale
2025-08-19  2:14 ` [PATCH net-next v4 01/14] crypto: octeontx2: Share engine group info with AF driver Tanmay Jagdale
2025-08-19  2:14 ` [PATCH net-next v4 02/14] octeontx2-af: Configure crypto hardware for inline ipsec Tanmay Jagdale
2025-08-19  2:14 ` [PATCH net-next v4 03/14] octeontx2-af: Setup Large Memory Transaction for crypto Tanmay Jagdale
2025-08-19  2:14 ` [PATCH net-next v4 04/14] octeontx2-af: Handle inbound inline ipsec config in AF Tanmay Jagdale
2025-08-19  2:14 ` [PATCH net-next v4 05/14] octeontx2-af: Add support for CPT second pass Tanmay Jagdale
2025-08-19  2:14 ` [PATCH net-next v4 06/14] octeontx2-af: Add support for SPI to SA index translation Tanmay Jagdale
2025-08-19  2:14 ` [PATCH net-next v4 07/14] octeontx2-af: Add mbox to alloc/free BPIDs Tanmay Jagdale
2025-08-19  2:14 ` [PATCH net-next v4 08/14] octeontx2-pf: ipsec: Allocate Ingress SA table Tanmay Jagdale
2025-08-19  2:15 ` [PATCH net-next v4 09/14] octeontx2-pf: ipsec: Setup NIX HW resources for inbound flows Tanmay Jagdale
2025-08-19  2:15 ` [PATCH net-next v4 10/14] octeontx2-pf: ipsec: Handle NPA threshold interrupt Tanmay Jagdale
2025-08-19  2:15 ` [PATCH net-next v4 11/14] octeontx2-pf: ipsec: Initialize ingress IPsec Tanmay Jagdale
2025-08-19  2:15 ` [PATCH net-next v4 12/14] octeontx2-pf: ipsec: Process CPT metapackets Tanmay Jagdale
2025-08-19  2:15 ` [PATCH net-next v4 13/14] octeontx2-pf: ipsec: Manage NPC rules and SPI-to-SA table entries Tanmay Jagdale
2025-08-19 16:10   ` kernel test robot
2025-08-19  2:15 ` [PATCH net-next v4 14/14] octeontx2-pf: ipsec: Add XFRM state and policy hooks for inbound flows Tanmay Jagdale

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).