netdev.vger.kernel.org archive mirror
* [PATCH net-next 0/2] net: usb: qmi_wwan implement tx packets aggregation
@ 2022-10-19 13:25 Daniele Palmas
  2022-10-19 13:25 ` [PATCH net-next 1/2] net: usb: qmi_wwan: implement qmap uplink tx aggregation Daniele Palmas
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Daniele Palmas @ 2022-10-19 13:25 UTC (permalink / raw)
  To: Bjørn Mork, David Miller, Jakub Kicinski, Paolo Abeni,
	Eric Dumazet
  Cc: linux-usb, netdev, Daniele Palmas

Hello Bjørn and all,

this patchset implements and documents qmap tx packet aggregation in qmi_wwan.

Low-cat Thread-x based modems are not capable of properly reaching the maximum
allowed throughput in both tx and rx during a bidirectional test if tx packet
aggregation is not enabled.

I verified this problem by using an MDM9207 Cat. 4 based modem (50Mbps/150Mbps
max throughput). What is actually happening is pictured at
https://drive.google.com/file/d/1xuAuDBfBEIM3Cdg2zHYQJ5tdk-JkfQn7/view?usp=sharing

When the rx and tx flows are tested individually, there's no issue in tx and only
minor issues in rx (a few spikes). When there are concurrent tx and rx flows, tx
throughput has a huge drop; rx has a minor one, but it is still present.

The same scenario with tx aggregation enabled is pictured at
https://drive.google.com/file/d/1Kw8TVFLVgr31o841fRu4fuMX9DNZqJB5/view?usp=sharing
showing a regular graph.

This issue does not happen with high-cat modems (e.g. SDX20), or at least it
does not happen at the throughputs I'm currently able to test: maybe the same
could happen when moving close to the maximum rates supported by those modems.
Anyway, having tx aggregation enabled should not hurt.

It is interesting to note that, from what I can understand, rmnet does not
support tx aggregation either.

I'm aware that rmnet should be the preferred way for qmap, but I think there's
still value in adding this feature to the qmi_wwan qmap implementation, since
there are many users of it in the field.

Moreover, having this in mainline could simplify backporting for those who are
using the qmi_wwan qmap feature but are stuck with old kernel versions.

I'm also aware that sysfs files are not the preferred way for configuration,
but it would feel odd to change the configuration method just for this feature,
making it different from the previous knobs.

Thanks,
Daniele

Daniele Palmas (2):
  net: usb: qmi_wwan: implement qmap uplink tx aggregation
  Documentation: ABI: sysfs-class-net-qmi: document tx aggregation files

 Documentation/ABI/testing/sysfs-class-net-qmi |  28 ++
 drivers/net/usb/qmi_wwan.c                    | 242 +++++++++++++++++-
 2 files changed, 266 insertions(+), 4 deletions(-)

-- 
2.37.1


^ permalink raw reply	[flat|nested] 7+ messages in thread

* [PATCH net-next 1/2] net: usb: qmi_wwan: implement qmap uplink tx aggregation
  2022-10-19 13:25 [PATCH net-next 0/2] net: usb: qmi_wwan implement tx packets aggregation Daniele Palmas
@ 2022-10-19 13:25 ` Daniele Palmas
  2022-10-19 13:25 ` [PATCH net-next 2/2] Documentation: ABI: sysfs-class-net-qmi: document tx aggregation files Daniele Palmas
  2022-10-19 15:04 ` [PATCH net-next 0/2] net: usb: qmi_wwan implement tx packets aggregation Bjørn Mork
  2 siblings, 0 replies; 7+ messages in thread
From: Daniele Palmas @ 2022-10-19 13:25 UTC (permalink / raw)
  To: Bjørn Mork, David Miller, Jakub Kicinski, Paolo Abeni,
	Eric Dumazet
  Cc: linux-usb, netdev, Daniele Palmas

Add qmap uplink tx packets aggregation.

Bidirectional TCP throughput tests through iperf with low-cat
Thread-x based modems showed performance issues both in tx
and rx.

The Windows driver does not show this issue: inspecting USB
packets revealed that the only notable change is the driver
enabling tx packets aggregation.

Tx packet aggregation, disabled by default, requires configuring
the maximum number of aggregated packets and the maximum aggregated
size: this information is provided by the modem and is available in
userspace through the WDA Set Data Format response, so two new
sysfs files are created for configuring the driver according to
those values.

This implementation is based on the cdc_ncm code developed by
Bjørn Mork.

Signed-off-by: Daniele Palmas <dnlplm@gmail.com>
---
 drivers/net/usb/qmi_wwan.c | 242 ++++++++++++++++++++++++++++++++++++-
 1 file changed, 238 insertions(+), 4 deletions(-)

diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
index 26c34a7c21bd..304f8126026d 100644
--- a/drivers/net/usb/qmi_wwan.c
+++ b/drivers/net/usb/qmi_wwan.c
@@ -54,6 +54,20 @@ struct qmi_wwan_state {
 	struct usb_interface *data;
 };
 
+struct qmi_wwan_priv {
+	struct sk_buff *qmimux_tx_curr_aggr_skb;
+	struct hrtimer qmimux_tx_timer;
+	struct tasklet_struct bh;
+	/* spinlock for tx packets aggregation */
+	spinlock_t qmimux_tx_mtx;
+	atomic_t stop;
+	u32 timer_interval;
+	u32 qmimux_tx_timer_pending;
+	u32 qmimux_tx_max_datagrams;
+	u32 qmimux_tx_max_size;
+	u32 qmimux_tx_current_datagrams_n;
+};
+
 enum qmi_wwan_flags {
 	QMI_WWAN_FLAG_RAWIP = 1 << 0,
 	QMI_WWAN_FLAG_MUX = 1 << 1,
@@ -94,24 +108,126 @@ static int qmimux_stop(struct net_device *dev)
 	return 0;
 }
 
+static void qmimux_tx_timeout_start(struct qmi_wwan_priv *priv)
+{
+	/* start timer, if not already started */
+	if (!(hrtimer_active(&priv->qmimux_tx_timer) || atomic_read(&priv->stop)))
+		hrtimer_start(&priv->qmimux_tx_timer,
+			      400UL * NSEC_PER_USEC,
+			      HRTIMER_MODE_REL);
+}
+
+static struct sk_buff *
+qmimux_fill_tx_frame(struct usbnet *dev, struct sk_buff *skb,
+		     unsigned int *n, unsigned int *len)
+{
+	struct qmi_wwan_priv *priv = dev->driver_priv;
+	struct sk_buff *skb_current = NULL;
+
+	if (!priv->qmimux_tx_curr_aggr_skb) {
+		/* The incoming skb size should be less than max ul packet aggregated size
+		 * otherwise it is dropped.
+		 */
+		if (skb->len > priv->qmimux_tx_max_size) {
+			*n = 0;
+			goto exit_skb;
+		}
+
+		priv->qmimux_tx_curr_aggr_skb = alloc_skb(priv->qmimux_tx_max_size, GFP_ATOMIC);
+		if (!priv->qmimux_tx_curr_aggr_skb) {
+			/* If memory allocation fails we simply return the skb in input */
+			skb_current = skb;
+		} else {
+			priv->qmimux_tx_curr_aggr_skb->dev = dev->net;
+			priv->qmimux_tx_current_datagrams_n = 1;
+			skb_put_data(priv->qmimux_tx_curr_aggr_skb, skb->data, skb->len);
+			priv->qmimux_tx_timer_pending = 2;
+			dev_kfree_skb_any(skb);
+		}
+	} else {
+		/* Queue the incoming skb */
+		if (skb->len + priv->qmimux_tx_curr_aggr_skb->len > priv->qmimux_tx_max_size) {
+			/* Send the current skb and copy the incoming one in a new buffer */
+			skb_current = priv->qmimux_tx_curr_aggr_skb;
+			*n = priv->qmimux_tx_current_datagrams_n;
+			*len = skb_current->len - priv->qmimux_tx_current_datagrams_n * 4;
+			priv->qmimux_tx_curr_aggr_skb =
+					alloc_skb(priv->qmimux_tx_max_size, GFP_ATOMIC);
+			if (priv->qmimux_tx_curr_aggr_skb) {
+				priv->qmimux_tx_curr_aggr_skb->dev = dev->net;
+				skb_put_data(priv->qmimux_tx_curr_aggr_skb, skb->data, skb->len);
+				dev_kfree_skb_any(skb);
+				priv->qmimux_tx_current_datagrams_n = 1;
+				priv->qmimux_tx_timer_pending = 2;
+				/* Start the timer, since we already have something to send */
+				qmimux_tx_timeout_start(priv);
+			}
+		} else {
+			/* Copy to current skb */
+			skb_put_data(priv->qmimux_tx_curr_aggr_skb, skb->data, skb->len);
+			dev_kfree_skb_any(skb);
+			priv->qmimux_tx_current_datagrams_n++;
+			if (priv->qmimux_tx_current_datagrams_n == priv->qmimux_tx_max_datagrams) {
+				/* Maximum number of datagrams reached, send them */
+				skb_current = priv->qmimux_tx_curr_aggr_skb;
+				*n = priv->qmimux_tx_current_datagrams_n;
+				*len = skb_current->len - priv->qmimux_tx_current_datagrams_n * 4;
+				priv->qmimux_tx_curr_aggr_skb = NULL;
+			} else {
+				priv->qmimux_tx_timer_pending = 2;
+			}
+		}
+	}
+
+exit_skb:
+	if (!skb_current)
+		qmimux_tx_timeout_start(priv);
+
+	return skb_current;
+}
+
 static netdev_tx_t qmimux_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct qmimux_priv *priv = netdev_priv(dev);
+	struct qmi_wwan_priv *usbdev_priv;
 	unsigned int len = skb->len;
+	struct sk_buff *skb_current;
 	struct qmimux_hdr *hdr;
+	struct usbnet *usbdev;
+	unsigned int n = 1;
 	netdev_tx_t ret;
 
+	usbdev = netdev_priv(priv->real_dev);
+	usbdev_priv = usbdev->driver_priv;
+
 	hdr = skb_push(skb, sizeof(struct qmimux_hdr));
 	hdr->pad = 0;
 	hdr->mux_id = priv->mux_id;
 	hdr->pkt_len = cpu_to_be16(len);
 	skb->dev = priv->real_dev;
-	ret = dev_queue_xmit(skb);
 
-	if (likely(ret == NET_XMIT_SUCCESS || ret == NET_XMIT_CN))
-		dev_sw_netstats_tx_add(dev, 1, len);
-	else
+	if (usbdev_priv->qmimux_tx_max_datagrams == 1) {
+		/* No tx aggregation requested */
+		skb_current = skb;
+	} else {
+		spin_lock_bh(&usbdev_priv->qmimux_tx_mtx);
+		skb_current = qmimux_fill_tx_frame(usbdev, skb, &n, &len);
+		spin_unlock_bh(&usbdev_priv->qmimux_tx_mtx);
+	}
+
+	if (skb_current) {
+		ret = dev_queue_xmit(skb_current);
+
+		if (likely(ret == NET_XMIT_SUCCESS || ret == NET_XMIT_CN))
+			dev_sw_netstats_tx_add(dev, n, len);
+		else
+			dev->stats.tx_dropped++;
+	} else if (n == 0) {
 		dev->stats.tx_dropped++;
+		ret = NET_XMIT_DROP;
+	} else {
+		ret = NET_XMIT_SUCCESS;
+	}
 
 	return ret;
 }
@@ -240,6 +356,39 @@ static struct attribute_group qmi_wwan_sysfs_qmimux_attr_group = {
 	.attrs = qmi_wwan_sysfs_qmimux_attrs,
 };
 
+static void qmimux_txpath_bh(struct tasklet_struct *t)
+{
+	struct qmi_wwan_priv *priv = from_tasklet(priv, t, bh);
+
+	if (!priv)
+		return;
+
+	spin_lock(&priv->qmimux_tx_mtx);
+	if (priv->qmimux_tx_timer_pending != 0) {
+		priv->qmimux_tx_timer_pending--;
+		qmimux_tx_timeout_start(priv);
+		spin_unlock(&priv->qmimux_tx_mtx);
+	} else if (priv->qmimux_tx_curr_aggr_skb) {
+		struct sk_buff *skb = priv->qmimux_tx_curr_aggr_skb;
+
+		priv->qmimux_tx_curr_aggr_skb = NULL;
+		spin_unlock(&priv->qmimux_tx_mtx);
+		dev_queue_xmit(skb);
+	} else {
+		spin_unlock(&priv->qmimux_tx_mtx);
+	}
+}
+
+static enum hrtimer_restart qmimux_tx_timer_cb(struct hrtimer *timer)
+{
+	struct qmi_wwan_priv *priv =
+				container_of(timer, struct qmi_wwan_priv, qmimux_tx_timer);
+
+	if (!atomic_read(&priv->stop))
+		tasklet_schedule(&priv->bh);
+	return HRTIMER_NORESTART;
+}
+
 static int qmimux_register_device(struct net_device *real_dev, u8 mux_id)
 {
 	struct net_device *new_dev;
@@ -516,16 +665,79 @@ static ssize_t pass_through_store(struct device *d,
 	return len;
 }
 
+static ssize_t tx_max_datagrams_mux_store(struct device *d,  struct device_attribute *attr,
+					  const char *buf, size_t len)
+{
+	struct usbnet *dev = netdev_priv(to_net_dev(d));
+	struct qmi_wwan_priv *priv = dev->driver_priv;
+	u8 qmimux_tx_max_datagrams;
+
+	if (netif_running(dev->net)) {
+		netdev_err(dev->net, "Cannot change a running device\n");
+		return -EBUSY;
+	}
+
+	if (kstrtou8(buf, 0, &qmimux_tx_max_datagrams))
+		return -EINVAL;
+
+	if (qmimux_tx_max_datagrams < 1)
+		return -EINVAL;
+
+	priv->qmimux_tx_max_datagrams = qmimux_tx_max_datagrams;
+
+	return len;
+}
+
+static ssize_t tx_max_datagrams_mux_show(struct device *d, struct device_attribute *attr, char *buf)
+{
+	struct usbnet *dev = netdev_priv(to_net_dev(d));
+	struct qmi_wwan_priv *priv = dev->driver_priv;
+
+	return sysfs_emit(buf, "%u\n", priv->qmimux_tx_max_datagrams);
+}
+
+static ssize_t tx_max_size_mux_store(struct device *d,  struct device_attribute *attr,
+				     const char *buf, size_t len)
+{
+	struct usbnet *dev = netdev_priv(to_net_dev(d));
+	struct qmi_wwan_priv *priv = dev->driver_priv;
+	unsigned long qmimux_tx_max_size;
+
+	if (netif_running(dev->net)) {
+		netdev_err(dev->net, "Cannot change a running device\n");
+		return -EBUSY;
+	}
+
+	if (kstrtoul(buf, 0, &qmimux_tx_max_size))
+		return -EINVAL;
+
+	priv->qmimux_tx_max_size = qmimux_tx_max_size;
+
+	return len;
+}
+
+static ssize_t tx_max_size_mux_show(struct device *d, struct device_attribute *attr, char *buf)
+{
+	struct usbnet *dev = netdev_priv(to_net_dev(d));
+	struct qmi_wwan_priv *priv = dev->driver_priv;
+
+	return sysfs_emit(buf, "%u\n", priv->qmimux_tx_max_size);
+}
+
 static DEVICE_ATTR_RW(raw_ip);
 static DEVICE_ATTR_RW(add_mux);
 static DEVICE_ATTR_RW(del_mux);
 static DEVICE_ATTR_RW(pass_through);
+static DEVICE_ATTR_RW(tx_max_datagrams_mux);
+static DEVICE_ATTR_RW(tx_max_size_mux);
 
 static struct attribute *qmi_wwan_sysfs_attrs[] = {
 	&dev_attr_raw_ip.attr,
 	&dev_attr_add_mux.attr,
 	&dev_attr_del_mux.attr,
 	&dev_attr_pass_through.attr,
+	&dev_attr_tx_max_datagrams_mux.attr,
+	&dev_attr_tx_max_size_mux.attr,
 	NULL,
 };
 
@@ -752,10 +964,16 @@ static int qmi_wwan_bind(struct usbnet *dev, struct usb_interface *intf)
 	struct usb_driver *driver = driver_of(intf);
 	struct qmi_wwan_state *info = (void *)&dev->data;
 	struct usb_cdc_parsed_header hdr;
+	struct qmi_wwan_priv *priv;
 
 	BUILD_BUG_ON((sizeof(((struct usbnet *)0)->data) <
 		      sizeof(struct qmi_wwan_state)));
 
+	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
+	if (!priv)
+		return -ENOMEM;
+	dev->driver_priv = priv;
+
 	/* set up initial state */
 	info->control = intf;
 	info->data = intf;
@@ -824,6 +1042,16 @@ static int qmi_wwan_bind(struct usbnet *dev, struct usb_interface *intf)
 		qmi_wwan_change_dtr(dev, true);
 	}
 
+	/* QMAP tx packets aggregation info */
+	hrtimer_init(&priv->qmimux_tx_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+	priv->qmimux_tx_timer.function = &qmimux_tx_timer_cb;
+	tasklet_setup(&priv->bh, qmimux_txpath_bh);
+	atomic_set(&priv->stop, 0);
+	spin_lock_init(&priv->qmimux_tx_mtx);
+	/* tx packets aggregation disabled by default and max size set to default MTU */
+	priv->qmimux_tx_max_datagrams = 1;
+	priv->qmimux_tx_max_size = dev->net->mtu;
+
 	/* Never use the same address on both ends of the link, even if the
 	 * buggy firmware told us to. Or, if device is assigned the well-known
 	 * buggy firmware MAC address, replace it with a random address,
@@ -849,9 +1077,15 @@ static int qmi_wwan_bind(struct usbnet *dev, struct usb_interface *intf)
 static void qmi_wwan_unbind(struct usbnet *dev, struct usb_interface *intf)
 {
 	struct qmi_wwan_state *info = (void *)&dev->data;
+	struct qmi_wwan_priv *priv = dev->driver_priv;
 	struct usb_driver *driver = driver_of(intf);
 	struct usb_interface *other;
 
+	atomic_set(&priv->stop, 1);
+	hrtimer_cancel(&priv->qmimux_tx_timer);
+	tasklet_kill(&priv->bh);
+	kfree(priv);
+
 	if (info->subdriver && info->subdriver->disconnect)
 		info->subdriver->disconnect(info->control);
 
-- 
2.37.1
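The aggregation state machine in qmimux_fill_tx_frame and the flush path in qmimux_txpath_bh above can be reduced to a small model. The following is a hedged sketch, not the driver itself: Python is used for compactness, the names mirror the patch, and the 4-byte mux header accounting, skb allocation, and locking are deliberately omitted.

```python
# Simplified model of the patch's tx aggregation logic (illustration only;
# field names follow the kernel code, everything else is a hypothetical
# reduction that ignores headers, allocation failures and locking).

class TxAggregator:
    """Buffers datagrams until max_datagrams or max_size is reached.

    push() returns an aggregated block ready to transmit, or None while
    buffering. timer_fire() models qmimux_txpath_bh: while timer_pending is
    non-zero the timer is effectively re-armed; once it reaches zero, any
    buffered data is flushed.
    """

    def __init__(self, max_datagrams, max_size):
        self.max_datagrams = max_datagrams
        self.max_size = max_size
        self.buf = bytearray()      # models qmimux_tx_curr_aggr_skb
        self.count = 0              # models qmimux_tx_current_datagrams_n
        self.timer_pending = 0      # models qmimux_tx_timer_pending

    def push(self, pkt):
        if not self.buf:
            if len(pkt) > self.max_size:
                return None         # oversized datagram: dropped by the driver
            self.buf += pkt
            self.count = 1
            self.timer_pending = 2
            return None
        if len(self.buf) + len(pkt) > self.max_size:
            # Block full: send it and start a new one with the incoming datagram.
            out = bytes(self.buf)
            self.buf = bytearray(pkt)
            self.count = 1
            self.timer_pending = 2
            return out
        self.buf += pkt
        self.count += 1
        if self.count == self.max_datagrams:
            # Maximum number of datagrams reached: send the block.
            out = bytes(self.buf)
            self.buf = bytearray()
            self.count = 0
            return out
        self.timer_pending = 2
        return None

    def timer_fire(self):
        if self.timer_pending:
            self.timer_pending -= 1  # fresh traffic seen recently: keep waiting
            return None
        if self.buf:
            out = bytes(self.buf)    # link idle: flush the partial block
            self.buf = bytearray()
            self.count = 0
            return out
        return None
```

This makes the latency/throughput trade-off visible: a block goes out early when either limit is hit, and the ~400us timer bounds how long a lone datagram can sit in the buffer.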



* [PATCH net-next 2/2] Documentation: ABI: sysfs-class-net-qmi: document tx aggregation files
  2022-10-19 13:25 [PATCH net-next 0/2] net: usb: qmi_wwan implement tx packets aggregation Daniele Palmas
  2022-10-19 13:25 ` [PATCH net-next 1/2] net: usb: qmi_wwan: implement qmap uplink tx aggregation Daniele Palmas
@ 2022-10-19 13:25 ` Daniele Palmas
  2022-10-19 15:04 ` [PATCH net-next 0/2] net: usb: qmi_wwan implement tx packets aggregation Bjørn Mork
  2 siblings, 0 replies; 7+ messages in thread
From: Daniele Palmas @ 2022-10-19 13:25 UTC (permalink / raw)
  To: Bjørn Mork, David Miller, Jakub Kicinski, Paolo Abeni,
	Eric Dumazet
  Cc: linux-usb, netdev, Daniele Palmas

Add documentation for:

/sys/class/net/<iface>/qmi/tx_max_datagrams_mux
/sys/class/net/<iface>/qmi/tx_max_size_mux

related to the qmap tx aggregation feature.

Signed-off-by: Daniele Palmas <dnlplm@gmail.com>
---
 Documentation/ABI/testing/sysfs-class-net-qmi | 28 +++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/Documentation/ABI/testing/sysfs-class-net-qmi b/Documentation/ABI/testing/sysfs-class-net-qmi
index 47e6b9732337..9c69ffa64b51 100644
--- a/Documentation/ABI/testing/sysfs-class-net-qmi
+++ b/Documentation/ABI/testing/sysfs-class-net-qmi
@@ -74,3 +74,31 @@ Description:
 
 		'Pass-through' mode can be enabled when the device is in
 		'raw-ip' mode only.
+
+What:		/sys/class/net/<iface>/qmi/tx_max_datagrams_mux
+Date:		October 2022
+KernelVersion:	6.2
+Contact:	Daniele Palmas <dnlplm@gmail.com>
+Description:
+		Unsigned integer. Default: 1, meaning tx aggregation disabled
+
+		The maximum number of QMAP aggregated tx packets.
+
+		This value is returned by the modem when calling the QMI request
+		wda set data format with QMAP tx aggregation enabled: userspace
+		should take care of setting the returned value to this file.
+
+What:		/sys/class/net/<iface>/qmi/tx_max_size_mux
+Date:		October 2022
+KernelVersion:	6.2
+Contact:	Daniele Palmas <dnlplm@gmail.com>
+Description:
+		Unsigned integer
+
+		The maximum size in bytes of a block of QMAP aggregated tx packets.
+
+		This value is returned by the modem when calling the QMI request
+		wda set data format with QMAP tx aggregation enabled: userspace
+		should take care of setting the returned value to this file.
+
+		This value is not considered if tx aggregation is disabled
+		(i.e. tx_max_datagrams_mux is 1).
-- 
2.37.1
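As a rough illustration of the workflow these two files imply, here is a hedged userspace sketch. The helper name, interface name, and numeric limits are all hypothetical; the real values must come from the modem's WDA Set Data Format response (e.g. obtained via a QMI userspace library), and the driver rejects writes while the interface is running.

```python
# Hypothetical helper: apply modem-provided QMAP tx aggregation limits to the
# sysfs files documented above. Values here are illustrative, not probed.
from pathlib import Path

def configure_tx_aggregation(iface, max_datagrams, max_size,
                             sysfs_root="/sys/class/net"):
    """Write the WDA Set Data Format limits to the qmi sysfs knobs.

    The interface must be down: the driver returns -EBUSY otherwise.
    """
    qmi = Path(sysfs_root) / iface / "qmi"
    (qmi / "tx_max_datagrams_mux").write_text(f"{max_datagrams}\n")
    (qmi / "tx_max_size_mux").write_text(f"{max_size}\n")

# Example with made-up values a modem might report:
# configure_tx_aggregation("wwan0", 32, 16384)
```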



* Re: [PATCH net-next 0/2] net: usb: qmi_wwan implement tx packets aggregation
  2022-10-19 13:25 [PATCH net-next 0/2] net: usb: qmi_wwan implement tx packets aggregation Daniele Palmas
  2022-10-19 13:25 ` [PATCH net-next 1/2] net: usb: qmi_wwan: implement qmap uplink tx aggregation Daniele Palmas
  2022-10-19 13:25 ` [PATCH net-next 2/2] Documentation: ABI: sysfs-class-net-qmi: document tx aggregation files Daniele Palmas
@ 2022-10-19 15:04 ` Bjørn Mork
  2022-10-19 15:48   ` Greg KH
  2022-10-19 18:04   ` Daniele Palmas
  2 siblings, 2 replies; 7+ messages in thread
From: Bjørn Mork @ 2022-10-19 15:04 UTC (permalink / raw)
  To: Daniele Palmas
  Cc: David Miller, Jakub Kicinski, Paolo Abeni, Eric Dumazet,
	linux-usb, netdev

Daniele Palmas <dnlplm@gmail.com> writes:

> I verified this problem by using a MDM9207 Cat. 4 based modem (50Mbps/150Mbps
> max throughput). What is actually happening is pictured at
> https://drive.google.com/file/d/1xuAuDBfBEIM3Cdg2zHYQJ5tdk-JkfQn7/view?usp=sharing
>
> When rx and tx flows are tested singularly there's no issue in tx and minor
> issues in rx (a few spikes). When there are concurrent tx and rx flows, tx
> throughput has a huge drop. rx a minor one, but still present.
>
> The same scenario with tx aggregation enabled is pictured at
> https://drive.google.com/file/d/1Kw8TVFLVgr31o841fRu4fuMX9DNZqJB5/view?usp=sharing
> showing a regular graph.

That's pretty extreme.  Are these tcp tests?  Did you experiment with
qdisc options? What about latency here?

> This issue does not happen with high-cat modems (e.g. SDX20), or at least it
> does not happen at the throughputs I'm able to test currently: maybe the same
> could happen when moving close to the maximum rates supported by those modems.
> Anyway, having the tx aggregation enabled should not hurt.
>
> It is interesting to note that, for what I can understand, rmnet too does not
> support tx aggregation.

Looks like that is missing, yes. Did you consider implementing it there
instead?

> I'm aware that rmnet should be the preferred way for qmap, but I think there's
> still value in adding this feature to qmi_wwan qmap implementation since there
> are in the field many users of that.
>
> Moreover, having this in mainline could simplify backporting for those who are
> using qmi_wwan qmap feature but are stuck with old kernel versions.
>
> I'm also aware of the fact that sysfs files for configuration are not the
> preferred way, but it would feel odd changing the way for configuring the driver
> just for this feature, having it different from the previous knobs.

It's not just that it's not the preferred way.. I believe I promised
that we wouldn't add anything more to this interface.  And then I broke
that promise, promising that it would never happen again.  So much for
my integrity.

This all looks very nice to me, and the results are great, and it's just
another knob...

But I don't think we can continue adding this stuff.  The QMAP handling
should be done in the rmnet driver. Unless there is some reason it can't
be there? Wouldn't the same code work there?

Luckily I can chicken out here, and leave the final decision to Jakub
and David.



Bjørn


* Re: [PATCH net-next 0/2] net: usb: qmi_wwan implement tx packets aggregation
  2022-10-19 15:04 ` [PATCH net-next 0/2] net: usb: qmi_wwan implement tx packets aggregation Bjørn Mork
@ 2022-10-19 15:48   ` Greg KH
  2022-10-20  0:55     ` Jakub Kicinski
  2022-10-19 18:04   ` Daniele Palmas
  1 sibling, 1 reply; 7+ messages in thread
From: Greg KH @ 2022-10-19 15:48 UTC (permalink / raw)
  To: Bjørn Mork
  Cc: Daniele Palmas, David Miller, Jakub Kicinski, Paolo Abeni,
	Eric Dumazet, linux-usb, netdev

On Wed, Oct 19, 2022 at 05:04:31PM +0200, Bjørn Mork wrote:
> Daniele Palmas <dnlplm@gmail.com> writes:
> > I'm aware that rmnet should be the preferred way for qmap, but I think there's
> > still value in adding this feature to qmi_wwan qmap implementation since there
> > are in the field many users of that.
> >
> > Moreover, having this in mainline could simplify backporting for those who are
> > using qmi_wwan qmap feature but are stuck with old kernel versions.
> >
> > I'm also aware of the fact that sysfs files for configuration are not the
> > preferred way, but it would feel odd changing the way for configuring the driver
> > just for this feature, having it different from the previous knobs.
> 
> It's not just that it's not the preferred way.. I believe I promised
> that we wouldn't add anything more to this interface.  And then I broke
> that promise, promising that it would never happen again.  So much for
> my integrity.
> 
> This all looks very nice to me, and the results are great, and it's just
> another knob...
> 

Please no more sysfs files for stuff like this.  This turns into
vendor-specific random files that no one knows how to change over time,
with no way to know what userspace tools are accessing them, or even if
anyone is using them at all.

Shouldn't there be standard ethtool apis for this?

> But I don't think we can continue adding this stuff.  The QMAP handling
> should be done in the rmnet driver. Unless there is some reason it can't
> be there? Wouldn't the same code work there?

rmnet would be better, but again, no new sysfs files please,

thanks,

greg k-h


* Re: [PATCH net-next 0/2] net: usb: qmi_wwan implement tx packets aggregation
  2022-10-19 15:04 ` [PATCH net-next 0/2] net: usb: qmi_wwan implement tx packets aggregation Bjørn Mork
  2022-10-19 15:48   ` Greg KH
@ 2022-10-19 18:04   ` Daniele Palmas
  1 sibling, 0 replies; 7+ messages in thread
From: Daniele Palmas @ 2022-10-19 18:04 UTC (permalink / raw)
  To: Bjørn Mork
  Cc: David Miller, Jakub Kicinski, Paolo Abeni, Eric Dumazet,
	linux-usb, netdev

On Wed, Oct 19, 2022 at 5:04 PM Bjørn Mork <bjorn@mork.no> wrote:
>
> Daniele Palmas <dnlplm@gmail.com> writes:
>
> > I verified this problem by using a MDM9207 Cat. 4 based modem (50Mbps/150Mbps
> > max throughput). What is actually happening is pictured at
> > https://drive.google.com/file/d/1xuAuDBfBEIM3Cdg2zHYQJ5tdk-JkfQn7/view?usp=sharing
> >
> > When rx and tx flows are tested singularly there's no issue in tx and minor
> > issues in rx (a few spikes). When there are concurrent tx and rx flows, tx
> > throughput has a huge drop. rx a minor one, but still present.
> >
> > The same scenario with tx aggregation enabled is pictured at
> > https://drive.google.com/file/d/1Kw8TVFLVgr31o841fRu4fuMX9DNZqJB5/view?usp=sharing
> > showing a regular graph.
>
> That's pretty extreme.  Are these tcp tests?  Did you experiment with
> qdisc options? What about latency here?
>

Yes, TCP with iperf. I did not try qdisc options and haven't measured latency (yet).

> > This issue does not happen with high-cat modems (e.g. SDX20), or at least it
> > does not happen at the throughputs I'm able to test currently: maybe the same
> > could happen when moving close to the maximum rates supported by those modems.
> > Anyway, having the tx aggregation enabled should not hurt.
> >
> > It is interesting to note that, for what I can understand, rmnet too does not
> > support tx aggregation.
>
> Looks like that is missing, yes. Did you consider implementing it there
> instead?
>

Yes, I thought about it, but it's something that has a broader impact,
since rmnet is not used just with USB. I'm not really comfortable with
that code, but I agree that's the way to go...

> > I'm aware that rmnet should be the preferred way for qmap, but I think there's
> > still value in adding this feature to qmi_wwan qmap implementation since there
> > are in the field many users of that.
> >
> > Moreover, having this in mainline could simplify backporting for those who are
> > using qmi_wwan qmap feature but are stuck with old kernel versions.
> >
> > I'm also aware of the fact that sysfs files for configuration are not the
> > preferred way, but it would feel odd changing the way for configuring the driver
> > just for this feature, having it different from the previous knobs.
>
> It's not just that it's not the preferred way.. I believe I promised
> that we wouldn't add anything more to this interface.  And then I broke
> that promise, promising that it would never happen again.  So much for
> my integrity.
>
> This all looks very nice to me, and the results are great, and it's just
> another knob...
>
> But I don't think we can continue adding this stuff.  The QMAP handling
> should be done in the rmnet driver. Unless there is some reason it can't
> be there? Wouldn't the same code work there?
>

Ok, I admit that your reasoning makes sense.

There's no real reason for not having tx aggregation in rmnet, besides
the fact that no one has added it yet.

There's some downstream code for example at
https://source.codeaurora.org/quic/la/kernel/msm-4.19/tree/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c?h=LA.UM.8.12.3.1#n405

I can try looking at that to see if I'm able to implement the same
feature in mainline rmnet.

Thanks for your comments!

Regards,
Daniele

> Luckily I can chicken out here, and leave the final decision to Jakub
> and David.
>
>
>
> Bjørn


* Re: [PATCH net-next 0/2] net: usb: qmi_wwan implement tx packets aggregation
  2022-10-19 15:48   ` Greg KH
@ 2022-10-20  0:55     ` Jakub Kicinski
  0 siblings, 0 replies; 7+ messages in thread
From: Jakub Kicinski @ 2022-10-20  0:55 UTC (permalink / raw)
  To: Greg KH, Daniele Palmas
  Cc: Bjørn Mork, David Miller, Paolo Abeni, Eric Dumazet,
	linux-usb, netdev

On Wed, 19 Oct 2022 17:48:35 +0200 Greg KH wrote:
> > It's not just that it's not the preferred way.. I believe I promised
> > that we wouldn't add anything more to this interface.  And then I broke
> > that promise, promising that it would never happen again.  So much for
> > my integrity.
> > 
> > This all looks very nice to me, and the results are great, and it's just
> > another knob...
> >   
> 
> Please no more sysfs files for stuff like this.  This turns into
> vendor-specific random files that no one knows how to change over time
> with no way to know what userspace tools are accessing them, or if even
> anyone is using them at all.
> 
> Shouldn't there be standard ethtool apis for this?

Not yet, but there should. We can add the new params to
struct kernel_ethtool_coalesce, and plumb them through ethtool netlink.
No major surgery required. Feel free to ask for more guidance if the
netlink-y stuff is confusing.

