From: "Toke Høiland-Jørgensen" <toke@redhat.com>
To: Felix Fietkau <nbd@nbd.name>, linux-wireless@vger.kernel.org
Cc: johannes@sipsolutions.net
Subject: Re: [PATCH 2/2] mac80211: minstrel_ht: replace rate stats ewma with a better moving average
Date: Sun, 29 Sep 2019 21:46:28 +0200
Message-ID: <87k19qyjfv.fsf@toke.dk>
In-Reply-To: <b2566142-a7ea-50e8-e683-a3702b75ea6f@nbd.name>
Felix Fietkau <nbd@nbd.name> writes:
> On 2019-09-29 20:42, Toke Høiland-Jørgensen wrote:
>> Felix Fietkau <nbd@nbd.name> writes:
>>
>>> Rate success probability usually fluctuates a lot under normal conditions.
>>> With a simple EWMA, noise and fluctuation can be reduced by increasing the
>>> window length, but that comes at the cost of introducing lag on sudden
>>> changes.
>>>
>>> This change replaces the EWMA implementation with a moving average that is
>>> designed to significantly reduce lag while allowing a bigger window size,
>>> since it is better at filtering out noise.
>>>
>>> It is only slightly more expensive than the simple EWMA and still avoids
>>> divisions in its calculation.
>>>
>>> The algorithm is adapted from an implementation intended for a completely
>>> different field (stock market trading), where the tradeoff of lag vs
>>> noise filtering is equally important. It is based on the "smoothing filter"
>>> from http://www.stockspotter.com/files/PredictiveIndicators.pdf.
>>>
>>> I have adapted it to fixed-point math with some constants so that it uses
>>> only addition, bit shifts and multiplication.
>>>
>>> To better make use of the filtering and bigger window size, the update
>>> interval time is cut in half.
>>>
>>> For testing, the algorithm can be reverted to the older one via
>>> debugfs.
>>
>> This looks interesting! Do you have any performance numbers from your
>> own testing to share? :)
> To show the difference, I also generated some random data, ran it
> through minstrel's EWMA and the new code and made a plot:
> http://nbd.name/ewma-filter-plot.png
Oh, wow, yeah, that looks way more responsive...
> The real-world test that I did used mt76x2:
> I ran 3 iperf TCP streams from an AP to a station in a cable setup with
> an attenuator.
> I switched from 70 dB attenuation to 40 dB and measured the time it
> takes for TCP throughput to stabilize at a higher rate.
> Without my changes it takes about 5-6 seconds; with my changes it's only
> 2-3 seconds.
Very cool. Thanks!
-Toke
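
For context, here is a rough, self-contained sketch in C of the two
approaches discussed above: a plain fixed-point EWMA and a two-pole
smoothing filter of the kind the commit message refers to. The names,
structure and coefficient values below are illustrative assumptions
(tuned for a nominal window of roughly 16 samples), not the values from
Felix's patch; the real implementation lives in the mac80211
minstrel_ht rate-control code.

/*
 * Illustrative only -- not the patch code. Fixed-point EWMA vs. a
 * two-pole smoothing filter with precomputed coefficients, so the
 * per-update work is just multiplies, adds and shifts (no division).
 */
#include <stdint.h>

#define AVG_SCALE	12			/* fixed point: 1.0 == (1 << 12) */
#define AVG_UNIT	(1 << AVG_SCALE)

/* Classic EWMA: avg += (in - avg) * w, here with weight w = 1/8. */
static int32_t ewma_add(int32_t avg, int32_t in)
{
	if (!avg)			/* first sample seeds the average */
		return in;
	return avg + ((in - avg) >> 3);
}

/*
 * Two-pole smoothing filter:
 *	out = c1 * (in + prev_in) / 2 + c2 * out[-1] + c3 * out[-2]
 * The /2 is folded into the c1 coefficient. Example coefficients for
 * a ~16-sample window; c1 + c2 + c3 is approximately 1, so a steady
 * input passes through essentially unchanged.
 */
#define AVG_C1_HALF	((int32_t)( 0.058 * AVG_UNIT))	/* c1 / 2 */
#define AVG_C2		((int32_t)( 1.458 * AVG_UNIT))
#define AVG_C3		((int32_t)(-0.574 * AVG_UNIT))

struct avg_state {
	int32_t prev_in;	/* in[-1] */
	int32_t out1;		/* out[-1] */
	int32_t out2;		/* out[-2] */
	int init;
};

static int32_t filter_add(struct avg_state *s, int32_t in)
{
	int64_t val;

	if (!s->init) {		/* seed the filter with the first sample */
		s->init = 1;
		s->prev_in = in;
		s->out1 = s->out2 = in;
		return in;
	}

	val  = (int64_t)AVG_C1_HALF * (in + s->prev_in);
	val += (int64_t)AVG_C2 * s->out1;
	val += (int64_t)AVG_C3 * s->out2;
	val >>= AVG_SCALE;

	/* Keep the result inside the valid probability range. */
	if (val > AVG_UNIT)
		val = AVG_UNIT;
	if (val < 0)
		val = 0;

	s->prev_in = in;
	s->out2 = s->out1;
	s->out1 = (int32_t)val;
	return (int32_t)val;
}

Feeding both updates the same step input (say, a success probability
jumping from 0.3 to 0.9 of AVG_UNIT) should show the two-pole filter
settling on the new level in noticeably fewer updates than an EWMA
tuned for a comparable amount of smoothing, which is the lag reduction
described above.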
Thread overview: 10+ messages
2019-09-29 15:46 [PATCH 1/2] mac80211: minstrel: remove divisions in tx status path Felix Fietkau
2019-09-29 15:46 ` [PATCH 2/2] mac80211: minstrel_ht: replace rate stats ewma with a better moving average Felix Fietkau
2019-09-29 18:42 ` Toke Høiland-Jørgensen
2019-09-29 19:18 ` Felix Fietkau
2019-09-29 19:46 ` Toke Høiland-Jørgensen [this message]
2019-10-01 10:17 ` Johannes Berg
2019-10-01 10:52 ` Felix Fietkau
2019-10-01 11:06 ` Johannes Berg
2019-10-01 11:11 ` Johannes Berg
2019-10-08 9:18 ` Koen Vandeputte