* [PATCH net-next v3 RESEND 00/10] net: qualcomm: rmnet: Enable csum offloads
From: Subash Abhinov Kasiviswanathan @ 2018-01-07 18:36 UTC (permalink / raw)
  To: davem, netdev, lkp, edumazet; +Cc: Subash Abhinov Kasiviswanathan

This series introduces the MAPv4 packet format for checksum
offload plus some other minor changes.

Patches 1-3 are cleanups.

Patch 4 renames the ingress data format to data format so that all
data formats can be configured through it going forward.

Patch 5 uses the pacing helper to improve TCP transmit performance.

Patches 6-9 define MAPv4 for checksum offload on RX and TX.
A new header and trailer format are used as part of MAPv4.
For RX checksum offload, hardware computes only the 1's complement
checksum over the IP payload portion. The metadata in the RX checksum
trailer is used to verify the checksum field in the packet. Note that
hardware does not modify the IP packet or its checksum field; it only
provides metadata to help with RX checksum validation. For TX, the
driver fills in the required metadata so that hardware can compute
the checksum.

Patch 10 enables GSO on rmnet devices.

v1->v2: Fix sparse errors reported by kbuild test robot

v2->v3: Update the commit message for Patch 5 based on Eric's comments

Subash Abhinov Kasiviswanathan (10):
  net: qualcomm: rmnet: Remove redundant check when stamping map header
  net: qualcomm: rmnet: Remove invalid condition while stamping mux id
  net: qualcomm: rmnet: Remove unused function declaration
  net: qualcomm: rmnet: Rename ingress data format to data format
  net: qualcomm: rmnet: Set pacing shift
  net: qualcomm: rmnet: Define the MAPv4 packet formats
  net: qualcomm: rmnet: Add support for RX checksum offload
  net: qualcomm: rmnet: Handle command packets with checksum trailer
  net: qualcomm: rmnet: Add support for TX checksum offload
  net: qualcomm: rmnet: Add support for GSO

 drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c |  10 +-
 drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h |   2 +-
 .../net/ethernet/qualcomm/rmnet/rmnet_handlers.c   |  36 ++-
 drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h    |  23 +-
 .../ethernet/qualcomm/rmnet/rmnet_map_command.c    |  17 +-
 .../net/ethernet/qualcomm/rmnet/rmnet_map_data.c   | 309 ++++++++++++++++++++-
 .../net/ethernet/qualcomm/rmnet/rmnet_private.h    |   2 +
 drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c    |   4 +
 8 files changed, 378 insertions(+), 25 deletions(-)

-- 
1.9.1

* [PATCH net-next v3 RESEND 01/10] net: qualcomm: rmnet: Remove redundant check when stamping map header
From: Subash Abhinov Kasiviswanathan @ 2018-01-07 18:36 UTC (permalink / raw)
  To: davem, netdev, lkp, edumazet; +Cc: Subash Abhinov Kasiviswanathan

We already check the headroom once in rmnet_map_egress_handler(),
so this check is not needed here.

Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
---
 drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c
index 86b8c75..978ce26 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c
@@ -32,9 +32,6 @@ struct rmnet_map_header *rmnet_map_add_map_header(struct sk_buff *skb,
 	u32 padding, map_datalen;
 	u8 *padbytes;
 
-	if (skb_headroom(skb) < sizeof(struct rmnet_map_header))
-		return NULL;
-
 	map_datalen = skb->len - hdrlen;
 	map_header = (struct rmnet_map_header *)
 			skb_push(skb, sizeof(struct rmnet_map_header));
-- 
1.9.1

* [PATCH net-next v3 RESEND 02/10] net: qualcomm: rmnet: Remove invalid condition while stamping mux id
From: Subash Abhinov Kasiviswanathan @ 2018-01-07 18:36 UTC (permalink / raw)
  To: davem, netdev, lkp, edumazet; +Cc: Subash Abhinov Kasiviswanathan

rmnet devices cannot have a mux id of 255. This is validated when
the mux id is assigned to an rmnet device. As a result, checking for
mux id 255 does not apply in the egress path.

Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
---
 drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
index 0553932..b2d317e3 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
@@ -143,10 +143,7 @@ static int rmnet_map_egress_handler(struct sk_buff *skb,
 	if (!map_header)
 		goto fail;
 
-	if (mux_id == 0xff)
-		map_header->mux_id = 0;
-	else
-		map_header->mux_id = mux_id;
+	map_header->mux_id = mux_id;
 
 	skb->protocol = htons(ETH_P_MAP);
 
-- 
1.9.1

* [PATCH net-next v3 RESEND 03/10] net: qualcomm: rmnet: Remove unused function declaration
From: Subash Abhinov Kasiviswanathan @ 2018-01-07 18:36 UTC (permalink / raw)
  To: davem, netdev, lkp, edumazet; +Cc: Subash Abhinov Kasiviswanathan

rmnet_map_demultiplex() is only declared but not defined anywhere,
so remove it.

Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
---
 drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h
index 4df359d..ef0eff2 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h
@@ -67,7 +67,6 @@ struct rmnet_map_header {
 #define RMNET_MAP_NO_PAD_BYTES        0
 #define RMNET_MAP_ADD_PAD_BYTES       1
 
-u8 rmnet_map_demultiplex(struct sk_buff *skb);
 struct sk_buff *rmnet_map_deaggregate(struct sk_buff *skb);
 struct rmnet_map_header *rmnet_map_add_map_header(struct sk_buff *skb,
 						  int hdrlen, int pad);
-- 
1.9.1

* [PATCH net-next v3 RESEND 04/10] net: qualcomm: rmnet: Rename ingress data format to data format
From: Subash Abhinov Kasiviswanathan @ 2018-01-07 18:36 UTC (permalink / raw)
  To: davem, netdev, lkp, edumazet; +Cc: Subash Abhinov Kasiviswanathan

This is done so that we can use this field for both ingress and
egress flags.

Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
---
 drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c   | 10 +++++-----
 drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h   |  2 +-
 drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c |  5 ++---
 3 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c
index cedacdd..7e7704d 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c
@@ -143,7 +143,7 @@ static int rmnet_newlink(struct net *src_net, struct net_device *dev,
 			 struct nlattr *tb[], struct nlattr *data[],
 			 struct netlink_ext_ack *extack)
 {
-	int ingress_format = RMNET_INGRESS_FORMAT_DEAGGREGATION;
+	u32 data_format = RMNET_INGRESS_FORMAT_DEAGGREGATION;
 	struct net_device *real_dev;
 	int mode = RMNET_EPMODE_VND;
 	struct rmnet_endpoint *ep;
@@ -185,11 +185,11 @@ static int rmnet_newlink(struct net *src_net, struct net_device *dev,
 		struct ifla_vlan_flags *flags;
 
 		flags = nla_data(data[IFLA_VLAN_FLAGS]);
-		ingress_format = flags->flags & flags->mask;
+		data_format = flags->flags & flags->mask;
 	}
 
-	netdev_dbg(dev, "data format [ingress 0x%08X]\n", ingress_format);
-	port->ingress_data_format = ingress_format;
+	netdev_dbg(dev, "data format [0x%08X]\n", data_format);
+	port->data_format = data_format;
 
 	return 0;
 
@@ -353,7 +353,7 @@ static int rmnet_changelink(struct net_device *dev, struct nlattr *tb[],
 		struct ifla_vlan_flags *flags;
 
 		flags = nla_data(data[IFLA_VLAN_FLAGS]);
-		port->ingress_data_format = flags->flags & flags->mask;
+		port->data_format = flags->flags & flags->mask;
 	}
 
 	return 0;
diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h
index 2ea9fe3..00e4634 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h
@@ -32,7 +32,7 @@ struct rmnet_endpoint {
  */
 struct rmnet_port {
 	struct net_device *dev;
-	u32 ingress_data_format;
+	u32 data_format;
 	u8 nr_rmnet_devs;
 	u8 rmnet_mode;
 	struct hlist_head muxed_ep[RMNET_MAX_LOGICAL_EP];
diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
index b2d317e3..8e1f43a 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
@@ -69,8 +69,7 @@ static void rmnet_set_skb_proto(struct sk_buff *skb)
 	u16 len;
 
 	if (RMNET_MAP_GET_CD_BIT(skb)) {
-		if (port->ingress_data_format
-		    & RMNET_INGRESS_FORMAT_MAP_COMMANDS)
+		if (port->data_format & RMNET_INGRESS_FORMAT_MAP_COMMANDS)
 			return rmnet_map_command(skb, port);
 
 		goto free_skb;
@@ -114,7 +113,7 @@ static void rmnet_set_skb_proto(struct sk_buff *skb)
 		skb_push(skb, ETH_HLEN);
 	}
 
-	if (port->ingress_data_format & RMNET_INGRESS_FORMAT_DEAGGREGATION) {
+	if (port->data_format & RMNET_INGRESS_FORMAT_DEAGGREGATION) {
 		while ((skbn = rmnet_map_deaggregate(skb)) != NULL)
 			__rmnet_map_ingress_handler(skbn, port);
 
-- 
1.9.1

* [PATCH net-next v3 RESEND 05/10] net: qualcomm: rmnet: Set pacing shift
From: Subash Abhinov Kasiviswanathan @ 2018-01-07 18:36 UTC (permalink / raw)
  To: davem, netdev, lkp, edumazet; +Cc: Subash Abhinov Kasiviswanathan

The real device over which the rmnet devices are installed also
aggregates multiple IP packets and sends them to the hardware as a
single large aggregate frame. This causes degraded throughput for
TCP TX due to bufferbloat.

To overcome this problem, a pacing shift value of 8 is set using the
sk_pacing_shift_update() helper. This value was determined from
experiments with a single-stream TCP TX using iperf for a duration
of 30s.

Pacing shift | Observed data rate (Mbps)
          10 | 9
           9 | 140
           8 | 146 (Max link rate)
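
For context, sk_pacing_shift bounds how much data TCP keeps queued
below the socket towards the device. A rough sketch of the limit,
assuming the TSQ formula used by tcp_small_queue_check() at the time,
is shown below; with the default shift of 10 roughly 1 ms worth of
data may be queued, while a shift of 8 allows about 4 ms, enough to
keep the aggregating hardware busy:

	/* Illustrative sketch only, not part of the patch. */
	static unsigned long tsq_limit_sketch(unsigned long pacing_rate,
					      int pacing_shift)
	{
		/* bytes TCP may leave queued in the qdisc / device queues */
		return pacing_rate >> pacing_shift;
	}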

Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
---
 drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
index 8e1f43a..8f8c4f2 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
@@ -16,6 +16,7 @@
 #include <linux/netdevice.h>
 #include <linux/netdev_features.h>
 #include <linux/if_arp.h>
+#include <net/sock.h>
 #include "rmnet_private.h"
 #include "rmnet_config.h"
 #include "rmnet_vnd.h"
@@ -204,6 +205,8 @@ void rmnet_egress_handler(struct sk_buff *skb)
 	struct rmnet_priv *priv;
 	u8 mux_id;
 
+	sk_pacing_shift_update(skb->sk, 8);
+
 	orig_dev = skb->dev;
 	priv = netdev_priv(orig_dev);
 	skb->dev = priv->real_dev;
-- 
1.9.1

* [PATCH net-next v3 RESEND 06/10] net: qualcomm: rmnet: Define the MAPv4 packet formats
From: Subash Abhinov Kasiviswanathan @ 2018-01-07 18:36 UTC (permalink / raw)
  To: davem, netdev, lkp, edumazet; +Cc: Subash Abhinov Kasiviswanathan

The MAPv4 packet format adds support for RX / TX checksum offload.
For a bi-directional UDP stream at a rate of 570 / 146 Mbps, roughly
10% of CPU cycles are saved.

In the receive path, a checksum trailer is appended to the end of
the MAP packet. The valid field indicates whether hardware has
computed the checksum. csum_start_offset is the offset from the start
of the IP header from which hardware computed the checksum.
csum_length is the number of bytes over which the checksum was
computed, and the resulting value is csum_value.

In the transmit path, a header is inserted between the end of the MAP
header and the start of the IP packet. csum_start_offset is the offset
in bytes from which hardware will compute the checksum if the
csum_enabled bit is set. udp_ip4_ind indicates whether a checksum
value of 0 is valid. csum_insert_offset is the offset from
csum_start_offset at which hardware will insert the computed checksum.

The use of this additional packet format for checksum offload is
explained in subsequent patches.
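
For illustration, assuming a plain TCP over IPv4 packet with a 20 byte
IP header and no options, the uplink checksum header would carry
roughly the values below (shown as assigned, before the driver converts
the second 16-bit word of the header to network order):

	/* Hypothetical example of MAPv4 UL checksum metadata for TCP/IPv4 */
	struct rmnet_map_ul_csum_header ul_example = {
		.csum_start_offset  = htons(20), /* IPv4 header length */
		.csum_insert_offset = 16,	 /* offsetof(struct tcphdr, check) */
		.udp_ip4_ind        = 0,	 /* not IPv4 UDP */
		.csum_enabled       = 1,
	};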

Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
---
 drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h     | 16 ++++++++++++++++
 drivers/net/ethernet/qualcomm/rmnet/rmnet_private.h |  2 ++
 2 files changed, 18 insertions(+)

diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h
index ef0eff2..50c50cd 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h
@@ -47,6 +47,22 @@ struct rmnet_map_header {
 	u16 pkt_len;
 }  __aligned(1);
 
+struct rmnet_map_dl_csum_trailer {
+	u8  reserved1;
+	u8  valid:1;
+	u8  reserved2:7;
+	u16 csum_start_offset;
+	u16 csum_length;
+	__be16 csum_value;
+} __aligned(1);
+
+struct rmnet_map_ul_csum_header {
+	__be16 csum_start_offset;
+	u16 csum_insert_offset:14;
+	u16 udp_ip4_ind:1;
+	u16 csum_enabled:1;
+} __aligned(1);
+
 #define RMNET_MAP_GET_MUX_ID(Y) (((struct rmnet_map_header *) \
 				 (Y)->data)->mux_id)
 #define RMNET_MAP_GET_CD_BIT(Y) (((struct rmnet_map_header *) \
diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_private.h b/drivers/net/ethernet/qualcomm/rmnet/rmnet_private.h
index d214280..de0143e 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_private.h
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_private.h
@@ -21,6 +21,8 @@
 /* Constants */
 #define RMNET_INGRESS_FORMAT_DEAGGREGATION      BIT(0)
 #define RMNET_INGRESS_FORMAT_MAP_COMMANDS       BIT(1)
+#define RMNET_INGRESS_FORMAT_MAP_CKSUMV4        BIT(2)
+#define RMNET_EGRESS_FORMAT_MAP_CKSUMV4         BIT(3)
 
 /* Replace skb->dev to a virtual rmnet device and pass up the stack */
 #define RMNET_EPMODE_VND (1)
-- 
1.9.1

* [PATCH net-next v3 RESEND 07/10] net: qualcomm: rmnet: Add support for RX checksum offload
From: Subash Abhinov Kasiviswanathan @ 2018-01-07 18:36 UTC (permalink / raw)
  To: davem, netdev, lkp, edumazet; +Cc: Subash Abhinov Kasiviswanathan

When using the MAPv4 packet format, receive checksum offload can be
enabled in hardware. The checksum computation over the pseudo header
is not offloaded, but the rest of the checksum computation over the
payload is offloaded. This applies only to TCP / UDP packets which
are not fragmented.

rmnet validates the TCP / UDP checksum of the packet using the
checksum from the checksum trailer added to the packet by hardware.
The validation is performed as follows -

1. Take the 1's complement of the checksum value from the trailer.
2. Compute the 1's complement checksum over the IPv4 / IPv6 header and
   subtract it from the value from step 1.
3. Compute the 1's complement checksum over the IPv4 / IPv6 pseudo
   header and add it to the value from step 2.
4. Subtract the checksum value in the TCP / UDP header from the value
   from step 3.
5. Compare the value from step 4 to the checksum value in the
   TCP / UDP header.
6. If the comparison in step 5 succeeds, CHECKSUM_UNNECESSARY is set
   and the packet is passed on to the network stack. On failure, the
   packet is passed on as is, without modifying the ip_summed field.

The checksum field is also checked for a UDP checksum of 0 as per
RFC 768 and for an unexpected TCP checksum of 0.

If checksum offload is disabled when using the MAPv4 packet format in
the receive path, the packet is passed to the network stack as is,
without the validations above.
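
For reference, once the MAP header has been pulled, the deaggregated
buffer handed to the checksum code is assumed to be laid out roughly
as sketched below; the trailer is read before the skb is trimmed down
to the IP packet length:

	/*
	 *  skb->data                            skb->data + len
	 *  |<-------------- len --------------->|
	 *  +------------------------+-----------+---------------------------+
	 *  |       IP packet        |  padding  | rmnet_map_dl_csum_trailer |
	 *  +------------------------+-----------+---------------------------+
	 *
	 *  "len" here is the MAP payload length including padding.
	 */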

Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
---
 .../net/ethernet/qualcomm/rmnet/rmnet_handlers.c   |  15 +-
 drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h    |   4 +-
 .../net/ethernet/qualcomm/rmnet/rmnet_map_data.c   | 186 ++++++++++++++++++++-
 drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c    |   2 +
 4 files changed, 201 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
index 8f8c4f2..3409458 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
@@ -66,8 +66,8 @@ static void rmnet_set_skb_proto(struct sk_buff *skb)
 			    struct rmnet_port *port)
 {
 	struct rmnet_endpoint *ep;
+	u16 len, pad;
 	u8 mux_id;
-	u16 len;
 
 	if (RMNET_MAP_GET_CD_BIT(skb)) {
 		if (port->data_format & RMNET_INGRESS_FORMAT_MAP_COMMANDS)
@@ -77,7 +77,8 @@ static void rmnet_set_skb_proto(struct sk_buff *skb)
 	}
 
 	mux_id = RMNET_MAP_GET_MUX_ID(skb);
-	len = RMNET_MAP_GET_LENGTH(skb) - RMNET_MAP_GET_PAD(skb);
+	pad = RMNET_MAP_GET_PAD(skb);
+	len = RMNET_MAP_GET_LENGTH(skb) - pad;
 
 	if (mux_id >= RMNET_MAX_LOGICAL_EP)
 		goto free_skb;
@@ -90,8 +91,14 @@ static void rmnet_set_skb_proto(struct sk_buff *skb)
 
 	/* Subtract MAP header */
 	skb_pull(skb, sizeof(struct rmnet_map_header));
-	skb_trim(skb, len);
 	rmnet_set_skb_proto(skb);
+
+	if (port->data_format & RMNET_INGRESS_FORMAT_MAP_CKSUMV4) {
+		if (!rmnet_map_checksum_downlink_packet(skb, len + pad))
+			skb->ip_summed = CHECKSUM_UNNECESSARY;
+	}
+
+	skb_trim(skb, len);
 	rmnet_deliver_skb(skb);
 	return;
 
@@ -115,7 +122,7 @@ static void rmnet_set_skb_proto(struct sk_buff *skb)
 	}
 
 	if (port->data_format & RMNET_INGRESS_FORMAT_DEAGGREGATION) {
-		while ((skbn = rmnet_map_deaggregate(skb)) != NULL)
+		while ((skbn = rmnet_map_deaggregate(skb, port)) != NULL)
 			__rmnet_map_ingress_handler(skbn, port);
 
 		consume_skb(skb);
diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h
index 50c50cd..ca9f473 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h
@@ -83,9 +83,11 @@ struct rmnet_map_ul_csum_header {
 #define RMNET_MAP_NO_PAD_BYTES        0
 #define RMNET_MAP_ADD_PAD_BYTES       1
 
-struct sk_buff *rmnet_map_deaggregate(struct sk_buff *skb);
+struct sk_buff *rmnet_map_deaggregate(struct sk_buff *skb,
+				      struct rmnet_port *port);
 struct rmnet_map_header *rmnet_map_add_map_header(struct sk_buff *skb,
 						  int hdrlen, int pad);
 void rmnet_map_command(struct sk_buff *skb, struct rmnet_port *port);
+int rmnet_map_checksum_downlink_packet(struct sk_buff *skb, u16 len);
 
 #endif /* _RMNET_MAP_H_ */
diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c
index 978ce26..881c1dc 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c
@@ -14,6 +14,9 @@
  */
 
 #include <linux/netdevice.h>
+#include <linux/ip.h>
+#include <linux/ipv6.h>
+#include <net/ip6_checksum.h>
 #include "rmnet_config.h"
 #include "rmnet_map.h"
 #include "rmnet_private.h"
@@ -21,6 +24,153 @@
 #define RMNET_MAP_DEAGGR_SPACING  64
 #define RMNET_MAP_DEAGGR_HEADROOM (RMNET_MAP_DEAGGR_SPACING / 2)
 
+static __sum16 *rmnet_map_get_csum_field(unsigned char protocol,
+					 const void *txporthdr)
+{
+	__sum16 *check = NULL;
+
+	switch (protocol) {
+	case IPPROTO_TCP:
+		check = &(((struct tcphdr *)txporthdr)->check);
+		break;
+
+	case IPPROTO_UDP:
+		check = &(((struct udphdr *)txporthdr)->check);
+		break;
+
+	default:
+		check = NULL;
+		break;
+	}
+
+	return check;
+}
+
+static int
+rmnet_map_ipv4_dl_csum_trailer(struct sk_buff *skb,
+			       struct rmnet_map_dl_csum_trailer *csum_trailer)
+{
+	__sum16 *csum_field, csum_temp, pseudo_csum, hdr_csum, ip_payload_csum;
+	u16 csum_value, csum_value_final;
+	struct iphdr *ip4h;
+	void *txporthdr;
+	__be16 addend;
+
+	ip4h = (struct iphdr *)(skb->data);
+	if ((ntohs(ip4h->frag_off) & IP_MF) ||
+	    ((ntohs(ip4h->frag_off) & IP_OFFSET) > 0))
+		return -EOPNOTSUPP;
+
+	txporthdr = skb->data + ip4h->ihl * 4;
+
+	csum_field = rmnet_map_get_csum_field(ip4h->protocol, txporthdr);
+
+	if (!csum_field)
+		return -EPROTONOSUPPORT;
+
+	/* RFC 768 - Skip IPv4 UDP packets where sender checksum field is 0 */
+	if (*csum_field == 0 && ip4h->protocol == IPPROTO_UDP)
+		return 0;
+
+	csum_value = ~ntohs(csum_trailer->csum_value);
+	hdr_csum = ~ip_fast_csum(ip4h, (int)ip4h->ihl);
+	ip_payload_csum = csum16_sub((__force __sum16)csum_value,
+				     (__force __be16)hdr_csum);
+
+	pseudo_csum = ~csum_tcpudp_magic(ip4h->saddr, ip4h->daddr,
+					 ntohs(ip4h->tot_len) - ip4h->ihl * 4,
+					 ip4h->protocol, 0);
+	addend = (__force __be16)ntohs((__force __be16)pseudo_csum);
+	pseudo_csum = csum16_add(ip_payload_csum, addend);
+
+	addend = (__force __be16)ntohs((__force __be16)*csum_field);
+	csum_temp = ~csum16_sub(pseudo_csum, addend);
+	csum_value_final = (__force u16)csum_temp;
+
+	if (unlikely(csum_value_final == 0)) {
+		switch (ip4h->protocol) {
+		case IPPROTO_UDP:
+			/* RFC 768 - DL4 1's complement rule for UDP csum 0 */
+			csum_value_final = ~csum_value_final;
+			break;
+
+		case IPPROTO_TCP:
+			/* DL4 Non-RFC compliant TCP checksum found */
+			if (*csum_field == (__force __sum16)0xFFFF)
+				csum_value_final = ~csum_value_final;
+			break;
+		}
+	}
+
+	if (csum_value_final == ntohs((__force __be16)*csum_field))
+		return 0;
+	else
+		return -EINVAL;
+}
+
+#if IS_ENABLED(CONFIG_IPV6)
+static int
+rmnet_map_ipv6_dl_csum_trailer(struct sk_buff *skb,
+			       struct rmnet_map_dl_csum_trailer *csum_trailer)
+{
+	__sum16 *csum_field, ip6_payload_csum, pseudo_csum, csum_temp;
+	u16 csum_value, csum_value_final;
+	__be16 ip6_hdr_csum, addend;
+	struct ipv6hdr *ip6h;
+	void *txporthdr;
+	u32 length;
+
+	ip6h = (struct ipv6hdr *)(skb->data);
+
+	txporthdr = skb->data + sizeof(struct ipv6hdr);
+	csum_field = rmnet_map_get_csum_field(ip6h->nexthdr, txporthdr);
+
+	if (!csum_field)
+		return -EPROTONOSUPPORT;
+
+	csum_value = ~ntohs(csum_trailer->csum_value);
+	ip6_hdr_csum = (__force __be16)
+			~ntohs((__force __be16)ip_compute_csum(ip6h,
+			       (int)(txporthdr - (void *)(skb->data))));
+	ip6_payload_csum = csum16_sub((__force __sum16)csum_value,
+				      ip6_hdr_csum);
+
+	length = (ip6h->nexthdr == IPPROTO_UDP) ?
+		 ntohs(((struct udphdr *)txporthdr)->len) :
+		 ntohs(ip6h->payload_len);
+	pseudo_csum = ~(csum_ipv6_magic(&ip6h->saddr, &ip6h->daddr,
+			     length, ip6h->nexthdr, 0));
+	addend = (__force __be16)ntohs((__force __be16)pseudo_csum);
+	pseudo_csum = csum16_add(ip6_payload_csum, addend);
+
+	addend = (__force __be16)ntohs((__force __be16)*csum_field);
+	csum_temp = ~csum16_sub(pseudo_csum, addend);
+	csum_value_final = (__force u16)csum_temp;
+
+	if (unlikely(csum_value_final == 0)) {
+		switch (ip6h->nexthdr) {
+		case IPPROTO_UDP:
+			/* RFC 2460 section 8.1
+			 * DL6 One's complement rule for UDP checksum 0
+			 */
+			csum_value_final = ~csum_value_final;
+			break;
+
+		case IPPROTO_TCP:
+			/* DL6 Non-RFC compliant TCP checksum found */
+			if (*csum_field == (__force __sum16)0xFFFF)
+				csum_value_final = ~csum_value_final;
+			break;
+		}
+	}
+
+	if (csum_value_final == ntohs((__force __be16)*csum_field))
+		return 0;
+	else
+		return -EINVAL;
+}
+#endif
+
 /* Adds MAP header to front of skb->data
  * Padding is calculated and set appropriately in MAP header. Mux ID is
  * initialized to 0.
@@ -66,7 +216,8 @@ struct rmnet_map_header *rmnet_map_add_map_header(struct sk_buff *skb,
  * returned, indicating that there are no more packets to deaggregate. Caller
  * is responsible for freeing the original skb.
  */
-struct sk_buff *rmnet_map_deaggregate(struct sk_buff *skb)
+struct sk_buff *rmnet_map_deaggregate(struct sk_buff *skb,
+				      struct rmnet_port *port)
 {
 	struct rmnet_map_header *maph;
 	struct sk_buff *skbn;
@@ -78,6 +229,9 @@ struct sk_buff *rmnet_map_deaggregate(struct sk_buff *skb)
 	maph = (struct rmnet_map_header *)skb->data;
 	packet_len = ntohs(maph->pkt_len) + sizeof(struct rmnet_map_header);
 
+	if (port->data_format & RMNET_INGRESS_FORMAT_MAP_CKSUMV4)
+		packet_len += sizeof(struct rmnet_map_dl_csum_trailer);
+
 	if (((int)skb->len - (int)packet_len) < 0)
 		return NULL;
 
@@ -97,3 +251,33 @@ struct sk_buff *rmnet_map_deaggregate(struct sk_buff *skb)
 
 	return skbn;
 }
+
+/* Validates packet checksums. Function takes a pointer to
+ * the beginning of a buffer which contains the IP payload +
+ * padding + checksum trailer.
+ * Only IPv4 and IPv6 are supported along with TCP & UDP.
+ * Fragmented or tunneled packets are not supported.
+ */
+int rmnet_map_checksum_downlink_packet(struct sk_buff *skb, u16 len)
+{
+	struct rmnet_map_dl_csum_trailer *csum_trailer;
+
+	if (unlikely(!(skb->dev->features & NETIF_F_RXCSUM)))
+		return -EOPNOTSUPP;
+
+	csum_trailer = (struct rmnet_map_dl_csum_trailer *)(skb->data + len);
+
+	if (!csum_trailer->valid)
+		return -EINVAL;
+
+	if (skb->protocol == htons(ETH_P_IP))
+		return rmnet_map_ipv4_dl_csum_trailer(skb, csum_trailer);
+	else if (skb->protocol == htons(ETH_P_IPV6))
+#if IS_ENABLED(CONFIG_IPV6)
+		return rmnet_map_ipv6_dl_csum_trailer(skb, csum_trailer);
+#else
+		return -EPROTONOSUPPORT;
+#endif
+
+	return 0;
+}
diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c
index 5bb29f4..879a2e0 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c
@@ -188,6 +188,8 @@ int rmnet_vnd_newlink(u8 id, struct net_device *rmnet_dev,
 	if (rmnet_get_endpoint(port, id))
 		return -EBUSY;
 
+	rmnet_dev->hw_features = NETIF_F_RXCSUM;
+
 	rc = register_netdevice(rmnet_dev);
 	if (!rc) {
 		ep->egress_dev = rmnet_dev;
-- 
1.9.1

* [PATCH net-next v3 RESEND 08/10] net: qualcomm: rmnet: Handle command packets with checksum trailer
From: Subash Abhinov Kasiviswanathan @ 2018-01-07 18:36 UTC (permalink / raw)
  To: davem, netdev, lkp, edumazet; +Cc: Subash Abhinov Kasiviswanathan

When using the MAPv4 packet format in conjunction with MAP commands,
a dummy DL checksum trailer will be appended to the packet. Before
this packet is sent out as an ACK, the DL checksum trailer needs to be
removed.

Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
---
 drivers/net/ethernet/qualcomm/rmnet/rmnet_map_command.c | 17 +++++++++++++++--
 1 file changed, 15 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_command.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_command.c
index 51e6049..6bc328f 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_command.c
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_command.c
@@ -58,11 +58,24 @@ static u8 rmnet_map_do_flow_control(struct sk_buff *skb,
 }
 
 static void rmnet_map_send_ack(struct sk_buff *skb,
-			       unsigned char type)
+			       unsigned char type,
+			       struct rmnet_port *port)
 {
 	struct rmnet_map_control_command *cmd;
 	int xmit_status;
 
+	if (port->data_format & RMNET_INGRESS_FORMAT_MAP_CKSUMV4) {
+		if (skb->len < sizeof(struct rmnet_map_header) +
+		    RMNET_MAP_GET_LENGTH(skb) +
+		    sizeof(struct rmnet_map_dl_csum_trailer)) {
+			kfree_skb(skb);
+			return;
+		}
+
+		skb_trim(skb, skb->len -
+			 sizeof(struct rmnet_map_dl_csum_trailer));
+	}
+
 	skb->protocol = htons(ETH_P_MAP);
 
 	cmd = RMNET_MAP_GET_CMD_START(skb);
@@ -100,5 +113,5 @@ void rmnet_map_command(struct sk_buff *skb, struct rmnet_port *port)
 		break;
 	}
 	if (rc == RMNET_MAP_COMMAND_ACK)
-		rmnet_map_send_ack(skb, rc);
+		rmnet_map_send_ack(skb, rc, port);
 }
-- 
1.9.1

* [PATCH net-next v3 RESEND 09/10] net: qualcomm: rmnet: Add support for TX checksum offload
From: Subash Abhinov Kasiviswanathan @ 2018-01-07 18:36 UTC (permalink / raw)
  To: davem, netdev, lkp, edumazet; +Cc: Subash Abhinov Kasiviswanathan

TX checksum offload uses the MAPv4 checksum header and applies to
TCP / UDP packets which are not fragmented. The following needs to be
done to have the checksum computed in hardware -

1. Set the checksum start offset and insert offset.
2. Set the csum_enabled bit.
3. Compute and set the 1's complement of the partial checksum field in
   the transport header.

Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
---
 .../net/ethernet/qualcomm/rmnet/rmnet_handlers.c   |   8 ++
 drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h    |   2 +
 .../net/ethernet/qualcomm/rmnet/rmnet_map_data.c   | 120 +++++++++++++++++++++
 drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c    |   1 +
 4 files changed, 131 insertions(+)

diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
index 3409458..601edec 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
@@ -141,11 +141,19 @@ static int rmnet_map_egress_handler(struct sk_buff *skb,
 	additional_header_len = 0;
 	required_headroom = sizeof(struct rmnet_map_header);
 
+	if (port->data_format & RMNET_EGRESS_FORMAT_MAP_CKSUMV4) {
+		additional_header_len = sizeof(struct rmnet_map_ul_csum_header);
+		required_headroom += additional_header_len;
+	}
+
 	if (skb_headroom(skb) < required_headroom) {
 		if (pskb_expand_head(skb, required_headroom, 0, GFP_KERNEL))
 			goto fail;
 	}
 
+	if (port->data_format & RMNET_EGRESS_FORMAT_MAP_CKSUMV4)
+		rmnet_map_checksum_uplink_packet(skb, orig_dev);
+
 	map_header = rmnet_map_add_map_header(skb, additional_header_len, 0);
 	if (!map_header)
 		goto fail;
diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h
index ca9f473..6ce31e2 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h
@@ -89,5 +89,7 @@ struct rmnet_map_header *rmnet_map_add_map_header(struct sk_buff *skb,
 						  int hdrlen, int pad);
 void rmnet_map_command(struct sk_buff *skb, struct rmnet_port *port);
 int rmnet_map_checksum_downlink_packet(struct sk_buff *skb, u16 len);
+void rmnet_map_checksum_uplink_packet(struct sk_buff *skb,
+				      struct net_device *orig_dev);
 
 #endif /* _RMNET_MAP_H_ */
diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c
index 881c1dc..c74a6c5 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c
@@ -171,6 +171,86 @@ static __sum16 *rmnet_map_get_csum_field(unsigned char protocol,
 }
 #endif
 
+static void rmnet_map_complement_ipv4_txporthdr_csum_field(void *iphdr)
+{
+	struct iphdr *ip4h = (struct iphdr *)iphdr;
+	void *txphdr;
+	u16 *csum;
+
+	txphdr = iphdr + ip4h->ihl * 4;
+
+	if (ip4h->protocol == IPPROTO_TCP || ip4h->protocol == IPPROTO_UDP) {
+		csum = (u16 *)rmnet_map_get_csum_field(ip4h->protocol, txphdr);
+		*csum = ~(*csum);
+	}
+}
+
+static void
+rmnet_map_ipv4_ul_csum_header(void *iphdr,
+			      struct rmnet_map_ul_csum_header *ul_header,
+			      struct sk_buff *skb)
+{
+	struct iphdr *ip4h = (struct iphdr *)iphdr;
+	__be16 *hdr = (__be16 *)ul_header, offset;
+
+	offset = htons((__force u16)(skb_transport_header(skb) -
+				     (unsigned char *)iphdr));
+	ul_header->csum_start_offset = offset;
+	ul_header->csum_insert_offset = skb->csum_offset;
+	ul_header->csum_enabled = 1;
+	if (ip4h->protocol == IPPROTO_UDP)
+		ul_header->udp_ip4_ind = 1;
+	else
+		ul_header->udp_ip4_ind = 0;
+
+	/* Changing remaining fields to network order */
+	hdr++;
+	*hdr = htons((__force u16)*hdr);
+
+	skb->ip_summed = CHECKSUM_NONE;
+
+	rmnet_map_complement_ipv4_txporthdr_csum_field(iphdr);
+}
+
+#if IS_ENABLED(CONFIG_IPV6)
+static void rmnet_map_complement_ipv6_txporthdr_csum_field(void *ip6hdr)
+{
+	struct ipv6hdr *ip6h = (struct ipv6hdr *)ip6hdr;
+	void *txphdr;
+	u16 *csum;
+
+	txphdr = ip6hdr + sizeof(struct ipv6hdr);
+
+	if (ip6h->nexthdr == IPPROTO_TCP || ip6h->nexthdr == IPPROTO_UDP) {
+		csum = (u16 *)rmnet_map_get_csum_field(ip6h->nexthdr, txphdr);
+		*csum = ~(*csum);
+	}
+}
+
+static void
+rmnet_map_ipv6_ul_csum_header(void *ip6hdr,
+			      struct rmnet_map_ul_csum_header *ul_header,
+			      struct sk_buff *skb)
+{
+	__be16 *hdr = (__be16 *)ul_header, offset;
+
+	offset = htons((__force u16)(skb_transport_header(skb) -
+				     (unsigned char *)ip6hdr));
+	ul_header->csum_start_offset = offset;
+	ul_header->csum_insert_offset = skb->csum_offset;
+	ul_header->csum_enabled = 1;
+	ul_header->udp_ip4_ind = 0;
+
+	/* Changing remaining fields to network order */
+	hdr++;
+	*hdr = htons((__force u16)*hdr);
+
+	skb->ip_summed = CHECKSUM_NONE;
+
+	rmnet_map_complement_ipv6_txporthdr_csum_field(ip6hdr);
+}
+#endif
+
 /* Adds MAP header to front of skb->data
  * Padding is calculated and set appropriately in MAP header. Mux ID is
  * initialized to 0.
@@ -281,3 +361,43 @@ int rmnet_map_checksum_downlink_packet(struct sk_buff *skb, u16 len)
 
 	return 0;
 }
+
+/* Generates UL checksum meta info header for IPv4 and IPv6 over TCP and UDP
+ * packets that are supported for UL checksum offload.
+ */
+void rmnet_map_checksum_uplink_packet(struct sk_buff *skb,
+				      struct net_device *orig_dev)
+{
+	struct rmnet_map_ul_csum_header *ul_header;
+	void *iphdr;
+
+	ul_header = (struct rmnet_map_ul_csum_header *)
+		    skb_push(skb, sizeof(struct rmnet_map_ul_csum_header));
+
+	if (unlikely(!(orig_dev->features &
+		     (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM))))
+		goto sw_csum;
+
+	if (skb->ip_summed == CHECKSUM_PARTIAL) {
+		iphdr = (char *)ul_header +
+			sizeof(struct rmnet_map_ul_csum_header);
+
+		if (skb->protocol == htons(ETH_P_IP)) {
+			rmnet_map_ipv4_ul_csum_header(iphdr, ul_header, skb);
+			return;
+		} else if (skb->protocol == htons(ETH_P_IPV6)) {
+#if IS_ENABLED(CONFIG_IPV6)
+			rmnet_map_ipv6_ul_csum_header(iphdr, ul_header, skb);
+			return;
+#else
+			goto sw_csum;
+#endif
+		}
+	}
+
+sw_csum:
+	ul_header->csum_start_offset = 0;
+	ul_header->csum_insert_offset = 0;
+	ul_header->csum_enabled = 0;
+	ul_header->udp_ip4_ind = 0;
+}
diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c
index 879a2e0..f7f57ce 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c
@@ -189,6 +189,7 @@ int rmnet_vnd_newlink(u8 id, struct net_device *rmnet_dev,
 		return -EBUSY;
 
 	rmnet_dev->hw_features = NETIF_F_RXCSUM;
+	rmnet_dev->hw_features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM;
 
 	rc = register_netdevice(rmnet_dev);
 	if (!rc) {
-- 
1.9.1

* [PATCH net-next v3 RESEND 10/10] net: qualcomm: rmnet: Add support for GSO
From: Subash Abhinov Kasiviswanathan @ 2018-01-07 18:36 UTC (permalink / raw)
  To: davem, netdev, lkp, edumazet; +Cc: Subash Abhinov Kasiviswanathan

Real devices may support scatter-gather (SG), so enable SG on rmnet
devices in order to use GSO. GSO reduces CPU cycles by 20% at a rate
of 146 Mbps for a single-stream TCP connection.

Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
---
 drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c
index f7f57ce..570a227 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c
@@ -190,6 +190,7 @@ int rmnet_vnd_newlink(u8 id, struct net_device *rmnet_dev,
 
 	rmnet_dev->hw_features = NETIF_F_RXCSUM;
 	rmnet_dev->hw_features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM;
+	rmnet_dev->hw_features |= NETIF_F_SG;
 
 	rc = register_netdevice(rmnet_dev);
 	if (!rc) {
-- 
1.9.1

* Re: [PATCH net-next v3 RESEND 00/10] net: qualcomm: rmnet: Enable csum offloads
From: David Miller @ 2018-01-08 19:01 UTC (permalink / raw)
  To: subashab; +Cc: netdev, lkp, edumazet

From: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
Date: Sun,  7 Jan 2018 11:36:30 -0700

> This series introduces the MAPv4 packet format for checksum
> offload plus some other minor changes.

Series applied, thanks.
