From: Yibo Zhao <yiboz@codeaurora.org>
To: Kalle Valo <kvalo@codeaurora.org>
Cc: erik.stromdahl@gmail.com, linux-wireless@vger.kernel.org,
	ath10k@lists.infradead.org, linux-wireless-owner@vger.kernel.org
Subject: Re: FW: [PATCH] ath10k: fix return value check in wake_tx_q op
Date: Mon, 04 Mar 2019 09:56:40 +0800	[thread overview]
Message-ID: <4dadbeebe8dc1e911cc4871d767b4ed1@codeaurora.org> (raw)
In-Reply-To: <a1c113622b9aec9b32393c0a3e159e32@codeaurora.org>

On 2019-02-25 12:40, Yibo Zhao wrote:
> On 2019-02-07 22:25, Kalle Valo wrote:
>> Yibo Zhao <yiboz@codeaurora.org> writes:
>> 
>>> We have encountered a performance issue on our two-core system after
>>> applying your patch. In WDS mode, the peak throughput in TCP-DL and
>>> UDP-DL dropped more than 10% compared with the previous code. In some
>>> cases, although throughput stays the same, one CPU's usage rises by
>>> about 20%, which adds roughly 10% to total CPU usage. With your
>>> change, I think the driver tries to push as many packets as it can in
>>> one call. During this time the driver's queue lock is held for too
>>> long on one CPU, and as a result the other CPU is blocked whenever it
>>> wants to acquire the same lock. Working this way is not efficient.
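>>> 
>>> To illustrate the pattern (a minimal sketch only, not the actual
>>> ath10k code; drv_wake_tx_queue(), push_one_txq_packet() and
>>> txqs_lock are hypothetical stand-ins for the real driver names):
>>> 
>>> /* Drain the txq to exhaustion while holding the driver lock. */
>>> static void drv_wake_tx_queue(struct txq *txq)
>>> {
>>> 	spin_lock_bh(&txqs_lock);
>>> 	/* keep pushing until the queue empties or FW runs out of room;
>>> 	 * meanwhile the other CPU spins waiting for txqs_lock */
>>> 	while (push_one_txq_packet(txq) >= 0)
>>> 		;
>>> 	spin_unlock_bh(&txqs_lock);
>>> }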
>>> 
>>> So I think it is better to revert the change until we come up with a
>>> new solution.
>> 
>> I don't think reverting is a clear option at this stage because that
>> again creates problems for SDIO. IIRC, without this patch SDIO was
>> sending one packet at a time (or something like that, I can't remember
>> all the details right now).
>> 
> 
> Below is the aqm result after a 1-minute test with Erik's patch applied:
> 
> target 19999us interval 99999us ecn yes
> tid ac backlog-bytes backlog-packets new-flows drops marks overlimit
> collisions tx-bytes tx-packets flags
> 0 2 0 0 5342229 0 0 0 0 3867657029 5342229 0x0(RUN)
> 1 3 0 0 0 0 0 0 0 0 0 0x0(RUN)
> 2 3 0 0 0 0 0 0 0 0 0 0x0(RUN)
> 3 2 0 0 0 0 0 0 0 0 0 0x0(RUN)
> 4 1 0 0 0 0 0 0 0 0 0 0x0(RUN)
> 5 1 0 0 0 0 0 0 0 0 0 0x0(RUN)
> 6 0 0 0 2 0 0 0 0 144 2 0x0(RUN)
> 7 0 0 0 2 0 0 0 0 282 2 0x0(RUN)
> 8 2 0 0 0 0 0 0 0 0 0 0x0(RUN)
> 9 3 0 0 0 0 0 0 0 0 0 0x0(RUN)
> 10 3 0 0 0 0 0 0 0 0 0 0x0(RUN)
> 11 2 0 0 0 0 0 0 0 0 0 0x0(RUN)
> 12 1 0 0 0 0 0 0 0 0 0 0x0(RUN)
> 13 1 0 0 0 0 0 0 0 0 0 0x0(RUN)
> 14 0 0 0 0 0 0 0 0 0 0 0x0(RUN)
> 15 0 0 0 0 0 0 0 0 0 0 0x0(RUN)
> 
> We see no difference between new-flows and tx-packets: the two counts
> are equal. If we revert the patch, by contrast, we get:
> 
> target 19999us interval 99999us ecn yes
> tid ac backlog-bytes backlog-packets new-flows drops marks overlimit
> collisions tx-bytes tx-packets flags
> 0 2 0 0 2233059 3 0 9236 12 1159661783 6380867 0x0(RUN)
> 1 3 0 0 0 0 0 0 0 0 0 0x0(RUN)
> 2 3 0 0 0 0 0 0 0 0 0 0x0(RUN)
> 3 2 0 0 0 0 0 0 0 0 0 0x0(RUN)
> 4 1 0 0 0 0 0 0 0 0 0 0x0(RUN)
> 5 1 0 0 0 0 0 0 0 0 0 0x0(RUN)
> 6 0 0 0 1 0 0 0 0 144 2 0x0(RUN)
> 7 0 0 0 1 0 0 0 0 282 2 0x0(RUN)
> 8 2 0 0 0 0 0 0 0 0 0 0x0(RUN)
> 9 3 0 0 0 0 0 0 0 0 0 0x0(RUN)
> 10 3 0 0 0 0 0 0 0 0 0 0x0(RUN)
> 11 2 0 0 0 0 0 0 0 0 0 0x0(RUN)
> 12 1 0 0 0 0 0 0 0 0 0 0x0(RUN)
> 13 1 0 0 0 0 0 0 0 0 0 0x0(RUN)
> 14 0 0 0 0 0 0 0 0 0 0 0x0(RUN)
> 15 0 0 0 0 0 0 0 0 0 0 0x0(RUN)
> 
> new-flows are roughly one-third of the total tx-packets.
> 
> IMHO, Erik's change has altered fq's scheduling behavior: it looks
> as if there are no other packets left in the fq after a packet has
> been dequeued. As a result, the flow's deficit is refilled and the
> flow is removed from the fq list at once, all on the same CPU, and
> during this time the other CPU can be blocked. When a new packet
> arrives, the same thing happens again, so we get equal new-flows and
> tx-packets counts.
> 
> Things are different without Erik's change. After a packet has been
> dequeued, the flow's deficit is not refilled immediately on CPU0; it
> may instead be refilled on CPU1 while CPU0 fetches more data from the
> ethernet side. As a result, aqm shows more tx-packets delivered to
> the firmware.
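> 
> To make the deficit argument concrete, here is a much-simplified
> sketch of the DRR-style dequeue the fq code performs (paraphrased,
> not the exact include/net/fq_impl.h source; the names below are
> illustrative only):
> 
> struct fq_flow_s {
> 	int deficit;			/* DRR credit in bytes */
> 	struct sk_buff_head queue;	/* per-flow packet queue */
> 	struct list_head flowchain;	/* link in the round-robin list */
> };
> 
> struct fq_s {
> 	struct list_head old_flows;	/* round-robin list of flows */
> 	int quantum;			/* bytes of credit per round */
> };
> 
> static struct sk_buff *fq_dequeue_one(struct fq_s *fq)
> {
> 	struct fq_flow_s *flow;
> 	struct sk_buff *skb;
> 
> 	for (;;) {
> 		flow = list_first_entry_or_null(&fq->old_flows,
> 						struct fq_flow_s, flowchain);
> 		if (!flow)
> 			return NULL;
> 
> 		if (flow->deficit <= 0) {
> 			/* credit used up: refill and rotate to the back */
> 			flow->deficit += fq->quantum;
> 			list_move_tail(&flow->flowchain, &fq->old_flows);
> 			continue;
> 		}
> 
> 		skb = __skb_dequeue(&flow->queue);
> 		if (!skb) {
> 			/* flow drained: unlink it, so its next packet
> 			 * is counted as a new flow again */
> 			list_del_init(&flow->flowchain);
> 			continue;
> 		}
> 
> 		flow->deficit -= skb->len;
> 		return skb;
> 	}
> }
> 
> If the driver drains every packet as soon as it is queued, the flow is
> always found empty here and unlinked immediately, which matches the
> equal new-flows and tx-packets counts above.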
> 
> 
>> Why does this happen only in WDS mode? Did you test other modes, like
>> AP or client mode?
> 
> AP mode has the same issue. UDP throughput drops more than 10%. As for
> TCP, CPU usage rises a lot although throughput stays similar.
> So, I'd say Erik's change does not work for us.

Hi Kalle,

May I have your comments?

-- 
Yibo

Thread overview: 8+ messages
2018-05-06 13:25 [PATCH] ath10k: fix return value check in wake_tx_q op Erik Stromdahl
2018-05-12  9:03 ` Kalle Valo
     [not found] ` <6dc00772b826410e930306891fd13ed9@euamsexm01f.eu.qualcomm.com>
     [not found]   ` <c77a1c0961f34ee68b7266444d2207f5@aptaiexm02e.ap.qualcomm.com>
2019-01-28  7:01     ` FW: [PATCH] " yiboz
2019-02-07 14:25       ` Kalle Valo
2019-02-25  4:40         ` Yibo Zhao
2019-03-04  1:56           ` Yibo Zhao [this message]
2019-03-11  6:44             ` Erik Stromdahl
2019-03-12  2:23               ` Yibo Zhao
