From: Daniel Zahka
To: Donald Hunter, Jakub Kicinski, "David S. Miller", Eric Dumazet,
	Paolo Abeni, Simon Horman, Jonathan Corbet, Andrew Lunn
Cc: Saeed Mahameed, Leon Romanovsky, Tariq Toukan, Boris Pismenny,
	Kuniyuki Iwashima, Willem de Bruijn, David Ahern, Neal Cardwell,
	Patrisious Haddad, Raed Salem, Jianbo Liu, Dragos Tatulea,
	Rahul Rameshbabu, Stanislav Fomichev, Toke Høiland-Jørgensen,
	Alexander Lobakin, Kiran Kella, Jacob Keller, netdev@vger.kernel.org
Subject: [PATCH net-next v10 15/19] net/mlx5e: Add PSP steering in local NIC RX
Date: Thu, 28 Aug 2025 09:29:41 -0700
Message-ID: <20250828162953.2707727-16-daniel.zahka@gmail.com>
In-Reply-To: <20250828162953.2707727-1-daniel.zahka@gmail.com>
References: <20250828162953.2707727-1-daniel.zahka@gmail.com>

From: Raed Salem

Introduce the decrypt flow table (FT), the RX error FT, and the default
rules.

The PSP RX decrypt flow table is pointed to by the TTC (Traffic Type
Classifier) UDP steering rules. The decrypt flow table has two flow
groups. The first flow group holds the decrypt steering rules; a PSP
packet is recognized by the dedicated UDP destination port number 1000,
and when a packet is decrypted, a PSP marker is set in
metadata_regB[30]. The second flow group has a default rule that
forwards all non-offloaded PSP packets to the TTC UDP default RSS TIR.

The RX error flow table is the destination of the decrypt steering
rules in the PSP RX decrypt flow table. It has two fixed rules. The
first carries a single copy action that copies psp_syndrome to
metadata_regB[23:29]. The PSP marker and syndrome are used to filter
out non-PSP packets and to return the PSP crypto offload status in the
Rx flow; the marker lets the driver identify such packets and set the
SKB PSP metadata. The destination of the RX error flow table is the TTC
UDP default RSS TIR. The second rule drops packets that failed
decryption (for example, when an illegal or expired SPI is used).
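As a reference for the metadata_regB layout described above (marker in
bits [31:30], as set by MLX5E_PSP_MARKER_BIT below, and psp_syndrome in
bits [29:23]), here is a minimal sketch of how such a 32-bit regB value
could be decoded in plain C. The mask and helper names are illustrative
only and are not part of this patch or of the mlx5 driver:

    /* Illustrative decode of the regB layout described above (not driver code). */
    #include <stdbool.h>
    #include <stdint.h>

    #define PSP_MARKER_MASK    (0x3u << 30)   /* bits [31:30]: PSP marker set on decrypt */
    #define PSP_SYNDROME_MASK  (0x7fu << 23)  /* bits [29:23]: copied psp_syndrome */

    static inline bool psp_marker_set(uint32_t regb)
    {
            return (regb & PSP_MARKER_MASK) != 0;
    }

    static inline uint8_t psp_syndrome(uint32_t regb)
    {
            return (uint8_t)((regb & PSP_SYNDROME_MASK) >> 23);
    }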
Signed-off-by: Raed Salem
Signed-off-by: Rahul Rameshbabu
Signed-off-by: Cosmin Ratiu
Signed-off-by: Daniel Zahka
---

Notes:
    v6:
      - move call to mlx5e_fs_get_ttc() to after null check of fs

    v1:
      - https://lore.kernel.org/netdev/20240510030435.120935-13-kuba@kernel.org/

 .../net/ethernet/mellanox/mlx5/core/en/fs.h   |   2 +-
 .../mellanox/mlx5/core/en_accel/psp_fs.c      | 482 +++++++++++++++++-
 2 files changed, 477 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h b/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
index 9560fcba643f..85a53e8bcbc7 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
@@ -88,7 +88,7 @@ enum {
 #ifdef CONFIG_MLX5_EN_ARFS
 	MLX5E_ARFS_FT_LEVEL = MLX5E_INNER_TTC_FT_LEVEL + 1,
 #endif
-#ifdef CONFIG_MLX5_EN_IPSEC
+#if defined(CONFIG_MLX5_EN_IPSEC) || defined(CONFIG_MLX5_EN_PSP)
 	MLX5E_ACCEL_FS_ESP_FT_LEVEL = MLX5E_INNER_TTC_FT_LEVEL + 1,
 	MLX5E_ACCEL_FS_ESP_FT_ERR_LEVEL,
 	MLX5E_ACCEL_FS_POL_FT_LEVEL,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/psp_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/psp_fs.c
index cabbc8f0d84a..22809fbc5b43 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/psp_fs.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/psp_fs.c
@@ -8,6 +8,12 @@
 #include "en_accel/psp_fs.h"
 #include "en_accel/psp.h"
 
+enum accel_fs_psp_type {
+	ACCEL_FS_PSP4,
+	ACCEL_FS_PSP6,
+	ACCEL_FS_PSP_NUM_TYPES,
+};
+
 struct mlx5e_psp_tx {
 	struct mlx5_flow_namespace *ns;
 	struct mlx5_flow_table *ft;
@@ -17,14 +23,15 @@
 	u32 refcnt;
 };
 
-struct mlx5e_psp_fs {
-	struct mlx5_core_dev *mdev;
-	struct mlx5e_psp_tx *tx_fs;
-	struct mlx5e_flow_steering *fs;
-};
-
 enum accel_psp_rule_action {
 	ACCEL_PSP_RULE_ACTION_ENCRYPT,
+	ACCEL_PSP_RULE_ACTION_DECRYPT,
+};
+
+enum accel_psp_syndrome {
+	PSP_OK = 0,
+	PSP_ICV_FAIL,
+	PSP_BAD_TRAILER,
 };
 
 struct mlx5e_accel_psp_rule {
@@ -32,6 +39,216 @@
 	u8 action;
 };
 
+struct mlx5e_psp_rx_err {
+	struct mlx5_flow_table *ft;
+	struct mlx5_flow_handle *rule;
+	struct mlx5_flow_handle *drop_rule;
+	struct mlx5_modify_hdr *copy_modify_hdr;
+};
+
+struct mlx5e_accel_fs_psp_prot {
+	struct mlx5_flow_table *ft;
+	struct mlx5_flow_group *miss_group;
+	struct mlx5_flow_handle *miss_rule;
+	struct mlx5_flow_destination default_dest;
+	struct mlx5e_psp_rx_err rx_err;
+	u32 refcnt;
+	struct mutex prot_mutex; /* protect ESP4/ESP6 protocol */
+	struct mlx5_flow_handle *def_rule;
+};
+
+struct mlx5e_accel_fs_psp {
+	struct mlx5e_accel_fs_psp_prot fs_prot[ACCEL_FS_PSP_NUM_TYPES];
+};
+
+struct mlx5e_psp_fs {
+	struct mlx5_core_dev *mdev;
+	struct mlx5e_psp_tx *tx_fs;
+	/* Rx manage */
+	struct mlx5e_flow_steering *fs;
+	struct mlx5e_accel_fs_psp *rx_fs;
+};
+
+/* PSP RX flow steering */
+static enum mlx5_traffic_types fs_psp2tt(enum accel_fs_psp_type i)
+{
+	if (i == ACCEL_FS_PSP4)
+		return MLX5_TT_IPV4_UDP;
+
+	return MLX5_TT_IPV6_UDP;
+}
+
+static void accel_psp_fs_rx_err_del_rules(struct mlx5e_psp_fs *fs,
+					  struct mlx5e_psp_rx_err *rx_err)
+{
+	if (rx_err->drop_rule) {
+		mlx5_del_flow_rules(rx_err->drop_rule);
+		rx_err->drop_rule = NULL;
+	}
+
+	if (rx_err->rule) {
+		mlx5_del_flow_rules(rx_err->rule);
+		rx_err->rule = NULL;
+	}
+
+	if (rx_err->copy_modify_hdr) {
+		mlx5_modify_header_dealloc(fs->mdev, rx_err->copy_modify_hdr);
+		rx_err->copy_modify_hdr = NULL;
+	}
+}
+
+static void accel_psp_fs_rx_err_destroy_ft(struct mlx5e_psp_fs *fs,
+					   struct mlx5e_psp_rx_err *rx_err)
+{
+	accel_psp_fs_rx_err_del_rules(fs, rx_err);
+
+	if (rx_err->ft) {
+		mlx5_destroy_flow_table(rx_err->ft);
+		rx_err->ft = NULL;
+	}
+}
+
+static void accel_psp_setup_syndrome_match(struct mlx5_flow_spec *spec,
+					   enum accel_psp_syndrome syndrome)
+{
+	void *misc_params_2;
+
+	spec->match_criteria_enable |= MLX5_MATCH_MISC_PARAMETERS_2;
+	misc_params_2 = MLX5_ADDR_OF(fte_match_param, spec->match_criteria, misc_parameters_2);
+	MLX5_SET_TO_ONES(fte_match_set_misc2, misc_params_2, psp_syndrome);
+	misc_params_2 = MLX5_ADDR_OF(fte_match_param, spec->match_value, misc_parameters_2);
+	MLX5_SET(fte_match_set_misc2, misc_params_2, psp_syndrome, syndrome);
+}
+
+static int accel_psp_fs_rx_err_add_rule(struct mlx5e_psp_fs *fs,
+					struct mlx5e_accel_fs_psp_prot *fs_prot,
+					struct mlx5e_psp_rx_err *rx_err)
+{
+	u8 action[MLX5_UN_SZ_BYTES(set_add_copy_action_in_auto)] = {};
+	struct mlx5_core_dev *mdev = fs->mdev;
+	struct mlx5_flow_act flow_act = {};
+	struct mlx5_modify_hdr *modify_hdr;
+	struct mlx5_flow_handle *fte;
+	struct mlx5_flow_spec *spec;
+	int err = 0;
+
+	spec = kzalloc(sizeof(*spec), GFP_KERNEL);
+	if (!spec)
+		return -ENOMEM;
+
+	/* Action to copy 7 bit psp_syndrome to regB[23:29] */
+	MLX5_SET(copy_action_in, action, action_type, MLX5_ACTION_TYPE_COPY);
+	MLX5_SET(copy_action_in, action, src_field, MLX5_ACTION_IN_FIELD_PSP_SYNDROME);
+	MLX5_SET(copy_action_in, action, src_offset, 0);
+	MLX5_SET(copy_action_in, action, length, 7);
+	MLX5_SET(copy_action_in, action, dst_field, MLX5_ACTION_IN_FIELD_METADATA_REG_B);
+	MLX5_SET(copy_action_in, action, dst_offset, 23);
+
+	modify_hdr = mlx5_modify_header_alloc(mdev, MLX5_FLOW_NAMESPACE_KERNEL,
+					      1, action);
+	if (IS_ERR(modify_hdr)) {
+		err = PTR_ERR(modify_hdr);
+		mlx5_core_err(mdev,
+			      "fail to alloc psp copy modify_header_id err=%d\n", err);
+		goto out_spec;
+	}
+
+	accel_psp_setup_syndrome_match(spec, PSP_OK);
+	/* create fte */
+	flow_act.action = MLX5_FLOW_CONTEXT_ACTION_MOD_HDR |
+			  MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
+	flow_act.modify_hdr = modify_hdr;
+	fte = mlx5_add_flow_rules(rx_err->ft, spec, &flow_act,
+				  &fs_prot->default_dest, 1);
+	if (IS_ERR(fte)) {
+		err = PTR_ERR(fte);
+		mlx5_core_err(mdev, "fail to add psp rx err copy rule err=%d\n", err);
+		goto out;
+	}
+	rx_err->rule = fte;
+
+	/* add default drop rule */
+	memset(spec, 0, sizeof(*spec));
+	memset(&flow_act, 0, sizeof(flow_act));
+	/* create fte */
+	flow_act.action = MLX5_FLOW_CONTEXT_ACTION_DROP;
+	fte = mlx5_add_flow_rules(rx_err->ft, spec, &flow_act, NULL, 0);
+	if (IS_ERR(fte)) {
+		err = PTR_ERR(fte);
+		mlx5_core_err(mdev, "fail to add psp rx err drop rule err=%d\n", err);
+		goto out_drop_rule;
+	}
+	rx_err->drop_rule = fte;
+	rx_err->copy_modify_hdr = modify_hdr;
+
+	goto out_spec;
+
+out_drop_rule:
+	mlx5_del_flow_rules(rx_err->rule);
+	rx_err->rule = NULL;
+out:
+	mlx5_modify_header_dealloc(mdev, modify_hdr);
+out_spec:
+	kfree(spec);
+	return err;
+}
+
+static int accel_psp_fs_rx_err_create_ft(struct mlx5e_psp_fs *fs,
+					 struct mlx5e_accel_fs_psp_prot *fs_prot,
+					 struct mlx5e_psp_rx_err *rx_err)
+{
+	struct mlx5_flow_namespace *ns = mlx5e_fs_get_ns(fs->fs, false);
+	struct mlx5_flow_table_attr ft_attr = {};
+	struct mlx5_flow_table *ft;
+	int err;
+
+	ft_attr.max_fte = 2;
+	ft_attr.autogroup.max_num_groups = 2;
+	ft_attr.level = MLX5E_ACCEL_FS_ESP_FT_ERR_LEVEL; // MLX5E_ACCEL_FS_TCP_FT_LEVEL
+	ft_attr.prio = MLX5E_NIC_PRIO;
+	ft = mlx5_create_auto_grouped_flow_table(ns, &ft_attr);
+	if (IS_ERR(ft)) {
+		err = PTR_ERR(ft);
+		mlx5_core_err(fs->mdev, "fail to create psp rx inline ft err=%d\n", err);
+		return err;
+	}
+
+	rx_err->ft = ft;
+	err = accel_psp_fs_rx_err_add_rule(fs, fs_prot, rx_err);
+	if (err)
+		goto out_err;
+
+	return 0;
+
+out_err:
+	mlx5_destroy_flow_table(ft);
+	rx_err->ft = NULL;
+	return err;
+}
+
+static void accel_psp_fs_rx_fs_destroy(struct mlx5e_accel_fs_psp_prot *fs_prot)
+{
+	if (fs_prot->def_rule) {
+		mlx5_del_flow_rules(fs_prot->def_rule);
+		fs_prot->def_rule = NULL;
+	}
+
+	if (fs_prot->miss_rule) {
+		mlx5_del_flow_rules(fs_prot->miss_rule);
+		fs_prot->miss_rule = NULL;
+	}
+
+	if (fs_prot->miss_group) {
+		mlx5_destroy_flow_group(fs_prot->miss_group);
+		fs_prot->miss_group = NULL;
+	}
+
+	if (fs_prot->ft) {
+		mlx5_destroy_flow_table(fs_prot->ft);
+		fs_prot->ft = NULL;
+	}
+}
+
 static void setup_fte_udp_psp(struct mlx5_flow_spec *spec, u16 udp_port)
 {
 	spec->match_criteria_enable |= MLX5_MATCH_OUTER_HEADERS;
@@ -41,6 +258,252 @@ static void setup_fte_udp_psp(struct mlx5_flow_spec *spec, u16 udp_port)
 	MLX5_SET(fte_match_set_lyr_2_4, spec->match_value, ip_protocol, IPPROTO_UDP);
 }
 
+static int accel_psp_fs_rx_create_ft(struct mlx5e_psp_fs *fs,
+				     struct mlx5e_accel_fs_psp_prot *fs_prot)
+{
+	struct mlx5_flow_namespace *ns = mlx5e_fs_get_ns(fs->fs, false);
+	u8 action[MLX5_UN_SZ_BYTES(set_add_copy_action_in_auto)] = {};
+	int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
+	struct mlx5_modify_hdr *modify_hdr = NULL;
+	struct mlx5_flow_table_attr ft_attr = {};
+	struct mlx5_flow_destination dest = {};
+	struct mlx5_core_dev *mdev = fs->mdev;
+	struct mlx5_flow_group *miss_group;
+	MLX5_DECLARE_FLOW_ACT(flow_act);
+	struct mlx5_flow_handle *rule;
+	struct mlx5_flow_spec *spec;
+	struct mlx5_flow_table *ft;
+	u32 *flow_group_in;
+	int err = 0;
+
+	flow_group_in = kvzalloc(inlen, GFP_KERNEL);
+	spec = kvzalloc(sizeof(*spec), GFP_KERNEL);
+	if (!flow_group_in || !spec) {
+		err = -ENOMEM;
+		goto out;
+	}
+
+	/* Create FT */
+	ft_attr.max_fte = 2;
+	ft_attr.level = MLX5E_ACCEL_FS_ESP_FT_LEVEL;
+	ft_attr.prio = MLX5E_NIC_PRIO;
+	ft_attr.autogroup.num_reserved_entries = 1;
+	ft_attr.autogroup.max_num_groups = 1;
+	ft = mlx5_create_auto_grouped_flow_table(ns, &ft_attr);
+	if (IS_ERR(ft)) {
+		err = PTR_ERR(ft);
+		mlx5_core_err(mdev, "fail to create psp rx ft err=%d\n", err);
+		goto out_err;
+	}
+	fs_prot->ft = ft;
+
+	/* Create miss_group */
+	MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, ft->max_fte - 1);
+	MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, ft->max_fte - 1);
+	miss_group = mlx5_create_flow_group(ft, flow_group_in);
+	if (IS_ERR(miss_group)) {
+		err = PTR_ERR(miss_group);
+		mlx5_core_err(mdev, "fail to create psp rx miss_group err=%d\n", err);
+		goto out_err;
+	}
+	fs_prot->miss_group = miss_group;
+
+	/* Create miss rule */
+	rule = mlx5_add_flow_rules(ft, spec, &flow_act, &fs_prot->default_dest, 1);
+	if (IS_ERR(rule)) {
+		err = PTR_ERR(rule);
+		mlx5_core_err(mdev, "fail to create psp rx miss_rule err=%d\n", err);
+		goto out_err;
+	}
+	fs_prot->miss_rule = rule;
+
+	/* Add default Rx psp rule */
+	setup_fte_udp_psp(spec, PSP_DEFAULT_UDP_PORT);
+	flow_act.crypto.type = MLX5_FLOW_CONTEXT_ENCRYPT_DECRYPT_TYPE_PSP;
+	/* Set bit[31, 30] PSP marker */
+	/* Set bit[29-23] psp_syndrome is set in error FT */
+#define MLX5E_PSP_MARKER_BIT (BIT(30) | BIT(31))
+	MLX5_SET(set_action_in, action, action_type, MLX5_ACTION_TYPE_SET);
+	MLX5_SET(set_action_in, action, field, MLX5_ACTION_IN_FIELD_METADATA_REG_B);
+	MLX5_SET(set_action_in, action, data, MLX5E_PSP_MARKER_BIT);
+	MLX5_SET(set_action_in, action, offset, 0);
+	MLX5_SET(set_action_in, action, length, 32);
+
+	modify_hdr = mlx5_modify_header_alloc(mdev, MLX5_FLOW_NAMESPACE_KERNEL, 1, action);
+	if (IS_ERR(modify_hdr)) {
+		err = PTR_ERR(modify_hdr);
+		mlx5_core_err(mdev, "fail to alloc psp set modify_header_id err=%d\n", err);
+		modify_hdr = NULL;
+		goto out_err;
+	}
+
+	flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST |
+			  MLX5_FLOW_CONTEXT_ACTION_CRYPTO_DECRYPT |
+			  MLX5_FLOW_CONTEXT_ACTION_MOD_HDR;
+	flow_act.modify_hdr = modify_hdr;
+	dest.type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
+	dest.ft = fs_prot->rx_err.ft;
+	rule = mlx5_add_flow_rules(fs_prot->ft, spec, &flow_act, &dest, 1);
+	if (IS_ERR(rule)) {
+		err = PTR_ERR(rule);
+		mlx5_core_err(mdev,
+			      "fail to add psp rule Rx decryption, err=%d, flow_act.action = %#04X\n",
+			      err, flow_act.action);
+		goto out_err;
+	}
+
+	fs_prot->def_rule = rule;
+	goto out;
+
+out_err:
+	accel_psp_fs_rx_fs_destroy(fs_prot);
+out:
+	kvfree(flow_group_in);
+	kvfree(spec);
+	return err;
+}
+
+static int accel_psp_fs_rx_destroy(struct mlx5e_psp_fs *fs, enum accel_fs_psp_type type)
+{
+	struct mlx5e_accel_fs_psp_prot *fs_prot;
+	struct mlx5e_accel_fs_psp *accel_psp;
+
+	accel_psp = fs->rx_fs;
+
+	/* The netdev unreg already happened, so all offloaded rule are already removed */
+	fs_prot = &accel_psp->fs_prot[type];
+
+	accel_psp_fs_rx_fs_destroy(fs_prot);
+
+	accel_psp_fs_rx_err_destroy_ft(fs, &fs_prot->rx_err);
+
+	return 0;
+}
+
+static int accel_psp_fs_rx_create(struct mlx5e_psp_fs *fs, enum accel_fs_psp_type type)
+{
+	struct mlx5_ttc_table *ttc = mlx5e_fs_get_ttc(fs->fs, false);
+	struct mlx5e_accel_fs_psp_prot *fs_prot;
+	struct mlx5e_accel_fs_psp *accel_psp;
+	int err;
+
+	accel_psp = fs->rx_fs;
+	fs_prot = &accel_psp->fs_prot[type];
+
+	fs_prot->default_dest = mlx5_ttc_get_default_dest(ttc, fs_psp2tt(type));
+
+	err = accel_psp_fs_rx_err_create_ft(fs, fs_prot, &fs_prot->rx_err);
+	if (err)
+		return err;
+
+	err = accel_psp_fs_rx_create_ft(fs, fs_prot);
+	if (err)
+		accel_psp_fs_rx_err_destroy_ft(fs, &fs_prot->rx_err);
+
+	return err;
+}
+
+static int accel_psp_fs_rx_ft_get(struct mlx5e_psp_fs *fs, enum accel_fs_psp_type type)
+{
+	struct mlx5e_accel_fs_psp_prot *fs_prot;
+	struct mlx5_flow_destination dest = {};
+	struct mlx5e_accel_fs_psp *accel_psp;
+	struct mlx5_ttc_table *ttc;
+	int err = 0;
+
+	if (!fs || !fs->rx_fs)
+		return -EINVAL;
+
+	ttc = mlx5e_fs_get_ttc(fs->fs, false);
+	accel_psp = fs->rx_fs;
+	fs_prot = &accel_psp->fs_prot[type];
+	mutex_lock(&fs_prot->prot_mutex);
+	if (fs_prot->refcnt++)
+		goto out;
+
+	/* create FT */
+	err = accel_psp_fs_rx_create(fs, type);
+	if (err) {
+		fs_prot->refcnt--;
+		goto out;
+	}
+
+	/* connect */
+	dest.type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
+	dest.ft = fs_prot->ft;
+	mlx5_ttc_fwd_dest(ttc, fs_psp2tt(type), &dest);
+
+out:
+	mutex_unlock(&fs_prot->prot_mutex);
+	return err;
+}
+
+static void accel_psp_fs_rx_ft_put(struct mlx5e_psp_fs *fs, enum accel_fs_psp_type type)
+{
+	struct mlx5_ttc_table *ttc = mlx5e_fs_get_ttc(fs->fs, false);
+	struct mlx5e_accel_fs_psp_prot *fs_prot;
+	struct mlx5e_accel_fs_psp *accel_psp;
+
+	accel_psp = fs->rx_fs;
+	fs_prot = &accel_psp->fs_prot[type];
+	mutex_lock(&fs_prot->prot_mutex);
+	if (--fs_prot->refcnt)
+		goto out;
+
+	/* disconnect */
+	mlx5_ttc_fwd_default_dest(ttc, fs_psp2tt(type));
+
+	/* remove FT */
+	accel_psp_fs_rx_destroy(fs, type);
+
+out:
+	mutex_unlock(&fs_prot->prot_mutex);
+}
+
+static void accel_psp_fs_cleanup_rx(struct mlx5e_psp_fs *fs)
+{
+	struct mlx5e_accel_fs_psp_prot *fs_prot;
+	struct mlx5e_accel_fs_psp *accel_psp;
+	enum accel_fs_psp_type i;
+
+	if (!fs->rx_fs)
+		return;
+
+	for (i = 0; i < ACCEL_FS_PSP_NUM_TYPES; i++)
+		accel_psp_fs_rx_ft_put(fs, i);
+
+	accel_psp = fs->rx_fs;
+	for (i = 0; i < ACCEL_FS_PSP_NUM_TYPES; i++) {
+		fs_prot = &accel_psp->fs_prot[i];
+		mutex_destroy(&fs_prot->prot_mutex);
+		WARN_ON(fs_prot->refcnt);
+	}
+	kfree(fs->rx_fs);
+	fs->rx_fs = NULL;
+}
+
+static int accel_psp_fs_init_rx(struct mlx5e_psp_fs *fs)
+{
+	struct mlx5e_accel_fs_psp_prot *fs_prot;
+	struct mlx5e_accel_fs_psp *accel_psp;
+	enum accel_fs_psp_type i;
+
+	accel_psp = kzalloc(sizeof(*accel_psp), GFP_KERNEL);
+	if (!accel_psp)
+		return -ENOMEM;
+
+	for (i = 0; i < ACCEL_FS_PSP_NUM_TYPES; i++) {
+		fs_prot = &accel_psp->fs_prot[i];
+		mutex_init(&fs_prot->prot_mutex);
+	}
+
+	for (i = 0; i < ACCEL_FS_PSP_NUM_TYPES; i++)
+		accel_psp_fs_rx_ft_get(fs, ACCEL_FS_PSP4);
+
+	fs->rx_fs = accel_psp;
+	return 0;
+}
+
 static int accel_psp_fs_tx_create_ft_table(struct mlx5e_psp_fs *fs)
 {
 	int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
@@ -206,6 +669,7 @@ int mlx5_accel_psp_fs_init_tx_tables(struct mlx5e_priv *priv)
 
 void mlx5e_accel_psp_fs_cleanup(struct mlx5e_psp_fs *fs)
 {
+	accel_psp_fs_cleanup_rx(fs);
 	accel_psp_fs_cleanup_tx(fs);
 	kfree(fs);
 }
@@ -225,8 +689,14 @@ struct mlx5e_psp_fs *mlx5e_accel_psp_fs_init(struct mlx5e_priv *priv)
 		goto err_tx;
 
 	fs->fs = priv->fs;
+	err = accel_psp_fs_init_rx(fs);
+	if (err)
+		goto err_rx;
 
 	return fs;
+
+err_rx:
+	accel_psp_fs_cleanup_tx(fs);
 err_tx:
 	kfree(fs);
 	return ERR_PTR(err);
-- 
2.47.3