From: Richard Weinberger <richard@nod.at>
To: netdev@vger.kernel.org
Cc: wei.fang@nxp.com, shenwei.wang@nxp.com, xiaoning.wang@nxp.com,
	linux-imx@nxp.com
Subject: fec: high latency with imx8mm compared to imx6q
Date: Fri, 17 Feb 2023 17:52:54 +0100 (CET)
Message-ID: <1422776754.146013.1676652774408.JavaMail.zimbra@nod.at>

Hi!

I'm investigating latency issues on an imx8mm system after
migrating from imx6q.
A regression test showed massive latency increases when single/small packets
are exchanged.

A simple test using ping exhibits the problem.
Pinging the very same host from the imx8mm shows a much higher RTT than from the imx6q.
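For reference, the test is essentially just the following (target host elided):

  ping -c 100 <host>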

Ping, 100 packets each, from imx6q:
rtt min/avg/max/mdev = 0.689/0.851/1.027/0.088 ms

Ping, 100 packets each, from imx8mm:
rtt min/avg/max/mdev = 1.073/2.064/2.189/0.330 ms

You can see that the average RTT has more than doubled.
I see the same results with every imx8mm system I've gotten my hands on so far.
The kernel version doesn't matter either; I've also tried the NXP tree, without success.

All reported numbers have been produced using vanilla Linux v6.2-rc8 with these boards:
PHYTEC phyBOARD-Mira Quad with an i.MX6Q, silicon rev 1.5
FSL i.MX8MM EVK board with an i.MX8MM, revision 1.0

While digging into the fec ethernet driver I noticed that on the imx8mm sending
a single packet takes extremely long.

I'm measuring the time between triggering transmission start,
the arrival of the transmit-done IRQ, and NAPI completion.
Don't be confused by the function names; gcc inlined aggressively.
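The traces below were read from the ftrace buffer, roughly like this
(assuming debugfs is mounted at the usual location):

  cat /sys/kernel/debug/tracing/trace

The full debug patch is at the end of this mail.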

imx6q:
   tst-104     [003] b..3.   217.340689: fec_enet_start_xmit: START skb: 8a68617d
   tst-104     [003] b..3.   217.340702: fec_enet_start_xmit: DONE skb: 8a68617d
<idle>-0       [000] d.h1.   217.340736: fec_enet_interrupt: 
<idle>-0       [000] d.h1.   217.340739: fec_enet_interrupt: scheduling napi
<idle>-0       [000] ..s1.   217.340774: fec_enet_rx_napi: TX DONE skb: 8a68617d

Time between submit and IRQ: 34us
Time between submit and TX done: 72us
(both measured from the xmit DONE trace point above)

imx8mm:
   tst-95      [000] b..2.   142.713409: fec_enet_start_xmit: START skb: 00000000ad10a62d
   tst-95      [000] b..2.   142.713417: fec_enet_start_xmit: DONE skb: 00000000ad10a62d
<idle>-0       [000] d.h1.   142.714428: fec_enet_interrupt: 
<idle>-0       [000] d.h1.   142.714430: fec_enet_interrupt: scheduling napi
<idle>-0       [000] ..s1.   142.714451: fec_enet_rx_napi: TX DONE skb: 00000000ad10a62d

Time between submit and IRQ: 1011us
Time between submit and TX done: 1034us

As you can see, imx8mm's fec needs more than a whole millisecond to send a single packet.
Please note that I'm talking only about latency; throughput is fine, and when the
transmitter is kept busy it seems to be much faster.

Is this a known issue?
Does fec need further tweaking for the imx8mm?
Could it be that the ethernet controller is in a sleep mode and needs to wake up each time?
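Or could it be interrupt coalescing? The ~1ms delay is suspiciously close to a
typical coalescing timer. If so, something like this should make the extra
latency disappear (just a guess, untested on the EVK so far):

  ethtool -c eth0                          # show current IRQ coalescing settings
  ethtool -C eth0 tx-usecs 0 tx-frames 1   # force an IRQ per transmitted packet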

Thanks,
//richard

My debug patch:
diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
index 2341597408d1..7b0d43d76dea 100644
--- a/drivers/net/ethernet/freescale/fec_main.c
+++ b/drivers/net/ethernet/freescale/fec_main.c
@@ -565,6 +565,8 @@ static int fec_enet_txq_submit_skb(struct fec_enet_priv_tx_q *txq,
 	unsigned int index;
 	int entries_free;
 
+	trace_printk("START skb: %p\n", skb);
+
 	entries_free = fec_enet_get_free_txdesc_num(txq);
 	if (entries_free < MAX_SKB_FRAGS + 1) {
 		dev_kfree_skb_any(skb);
@@ -674,6 +676,7 @@ static int fec_enet_txq_submit_skb(struct fec_enet_priv_tx_q *txq,
 
 	/* Trigger transmission start */
 	writel(0, txq->bd.reg_desc_active);
+	trace_printk("DONE skb: %p\n", skb);
 
 	return 0;
 }
@@ -1431,6 +1434,7 @@ fec_enet_tx_queue(struct net_device *ndev, u16 queue_id)
 		} else {
 			ndev->stats.tx_packets++;
 			ndev->stats.tx_bytes += skb->len;
+			trace_printk("TX DONE skb: %p\n", skb);
 		}
 
 		/* NOTE: SKBTX_IN_PROGRESS being set does not imply it's we who
@@ -1809,12 +1813,15 @@ fec_enet_interrupt(int irq, void *dev_id)
 	struct fec_enet_private *fep = netdev_priv(ndev);
 	irqreturn_t ret = IRQ_NONE;
 
+	trace_printk("\n");
+
 	if (fec_enet_collect_events(fep) && fep->link) {
 		ret = IRQ_HANDLED;
 
 		if (napi_schedule_prep(&fep->napi)) {
 			/* Disable interrupts */
 			writel(0, fep->hwp + FEC_IMASK);
+			trace_printk("scheduling napi\n");
 			__napi_schedule(&fep->napi);
 		}
 	}
