linux-wireless.vger.kernel.org archive mirror
* ath5k: Weird Retransmission Behaviour
@ 2010-12-06  6:30 Jonathan Guerin
  2010-12-06  8:14 ` Bruno Randolf
  2010-12-06  9:38 ` Nick Kossifidis
  0 siblings, 2 replies; 27+ messages in thread
From: Jonathan Guerin @ 2010-12-06  6:30 UTC (permalink / raw)
  To: linux-wireless, ath5k-devel, Bruno Randolf

Hi,


I've been doing some investigation into the behaviour of contention
windows and retransmissions.

Firstly, I'll just describe the test scenario and setup that I have. I
have 3 Via x86 nodes with Atheros AR5001X+ cards. They are tethered to
each other via coaxial cables, into splitters. They have 20dB of fixed
attenuation applied to each antenna output, plus a programmable
variable attenuator on each link. One node acts as a sender, one as a
receiver, and one simply runs a monitor-mode interface to capture
packet traces. All 3 are running kernel version 2.6.37-rc2. The sender
and receiver are configured as IBSS stations and are tuned to 5.18
GHz.

Here's a really dodgy ASCII diagram of the setup:

S-----[variable attenuator]-----R
|                               |
+--------------M----------------+

where S is the Sender node, R is the Receiver node and M is the
Monitoring capture node.


Secondly, I have written a program which will parse a captured pcap
file from the Monitoring station. It looks for 'chains' of frames with
the same sequence number, and where the first frame has the Retry bit
set to false in the header and all following have it set to true. Any
deviation from this, and the program drops the current chain without
including it in its stats, and looks for the next chain matching these
requirements. It averages the amount of time per transmission number
(i.e. the average over all transmissions which were the first, second,
third etc. for a unique sequence number). The transmission time of a
frame is the amount of time between the end of that frame and the end
of the previous one. It considers the last transmission number in each
chain to be the 'final' transmission.
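For what it's worth, the chain-grouping and averaging logic described
above can be sketched like this (a minimal Python sketch with my own
naming; it assumes frames have already been pulled out of the pcap as
(seq, retry, end_time) tuples, and it only measures gaps within a
chain, so the first transmission's time is not computed):

```python
def group_chains(frames):
    """Group (seq, retry, end_time) tuples into retry chains.

    A chain starts with a frame whose Retry bit is False and continues
    while frames carry the same sequence number with Retry set to True.
    Any deviation drops the partial chain from the stats.
    """
    chains, current = [], []
    for seq, retry, end in frames:
        if not retry:
            # First transmission: close the previous chain, start a new one.
            if current:
                chains.append(current)
            current = [(seq, end)]
        elif current and seq == current[0][0]:
            # Valid retransmission: same sequence number, Retry bit set.
            current.append((seq, end))
        else:
            # Deviation from the pattern: discard the partial chain.
            current = []
    if current:
        chains.append(current)
    return chains

def avg_time_per_tx_no(chains):
    """Average inter-frame gap per transmission number (2nd, 3rd, ...).

    The transmission time of a frame is the time between its end and
    the end of the previous frame in the same chain.
    """
    sums, counts = {}, {}
    for chain in chains:
        for i in range(1, len(chain)):
            txno = i + 1
            delta = chain[i][1] - chain[i - 1][1]
            sums[txno] = sums.get(txno, 0) + delta
            counts[txno] = counts.get(txno, 0) + 1
    return {n: sums[n] / counts[n] for n in sums}
```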

Finally, the link is loaded using a saturated UDP flow, and the data
rate is fixed at either 36M or 54M, as specified in the output. The
output is attached below.

The output describes the fixed link data rate, the variable
attenuator's value, the delivery ratio, and the number of transmitted
packets/s. I've added a discussion per result set. Each line outputs
the transmission number, the average transmission time for this
number, the total number of transmissions, the number of frames which
ended their transmissions at this number (i.e. where the chain ended
its final transmission - this is equivalent to the retransmission
value from the Radiotap header + 1), and the average expected
transmission time for that particular transmission number across all
chains. This is calculated using the airtime calculations from the
802.11a standard, including the receipt of an ACK frame (28 us) as
well as a SIFS (16 us). If the transmission did not receive an ACK,
the normal ACK timeout is 50 us, but ath5k appears to have this set
to 25 us, so the values shouldn't be too far from what to expect.
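For reference, those constants fall out of the 802.11a OFDM airtime
formula. This sketch (my own helper, assuming roughly 1534-byte frames
on air for the iperf flow) gives 28 us for a 14-byte ACK at the 24M
basic rate, 364 us for a data frame at 36M and 248 us at 54M:

```python
import math

def ofdm_airtime_us(frame_bytes, rate_mbps):
    """802.11a airtime: 16 us preamble + 4 us SIGNAL symbol, then
    4 us OFDM data symbols carrying SERVICE (16 bits) + the frame
    + 6 tail bits. Each symbol carries 4 * rate data bits
    (e.g. 144 bits per symbol at 36 Mbit/s)."""
    ndbps = rate_mbps * 4
    return 20 + 4 * math.ceil((16 + 8 * frame_bytes + 6) / ndbps)

# A 14-byte ACK at the 24 Mbit/s basic rate lasts 28 us.
ACK_US = ofdm_airtime_us(14, 24)
```

Adding SIFS (16 us), DIFS (34 us) and the mean minimum backoff
(7.5 slots of 9 us) to the 36M data airtime gives
364 + 16 + 28 + 34 + 67.5 = 509.5, matching the first ExpectedAvg row.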

The header of each result gives the rate the link was fixed at, as
well as the variable attenuation added to it. The link also has a
fixed 40dB of attenuation, both to protect the cards and to give the
variable attenuator the range necessary to control link quality.
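The progression of the ExpectedAvg column can be reconstructed as the
first-attempt time plus the growth of the mean backoff as the
contention window doubles per retry. This is a sketch inferred from
the listed values, assuming 9 us slots, CWmin 15 and CWmax 1023 (the
802.11a defaults):

```python
def expected_avg_us(txno, base_us=509, cwmin=15, cwmax=1023, slot_us=9):
    """Expected time for the txno-th transmission attempt (1-based).

    The contention window doubles on each retry, capped at cwmax, and
    the mean backoff is CW/2 slots on top of the first-attempt time."""
    cw = min((cwmin + 1) * 2 ** (txno - 1) - 1, cwmax)
    return base_us + (cw - cwmin) * slot_us / 2
```

With base_us=509 this reproduces the 36M column exactly: 509, 581,
725, 1013, 1589, 2741, then 5045 once the window saturates at 1023.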

==> iperf_33M_rate_36M_att_1dB.pcap.txt <== (good link, 100% delivery)
Average time per TX No:
TXNo	Avg			No		Final    	ExpectedAvg
1		477.604980	10463	10462   	509
Overall average: 477.604980

[Discussion:] Nothing, appears normal.


==> iperf_33M_rate_36M_att_18dB.pcap.txt <== (lossy link, but still
100% delivery)
Average time per TX No:
TXNo	Avg			No		Final   	ExpectedAvg
1		476.966766	9808		8138   	509
2		550.320496	1663		1403   	581
3		697.552917	255		218   	725
4		1028.756714	37		30		1013
5		1603.428589	7		7   		1589
Overall average: 494.514618

[Discussion:] Nothing, appears normal. Contention window appears to
double normally.

==> iperf_33M_rate_36M_att_19dB.pcap.txt <== (lossy link, but still
100% delivery)
Average time per TX No:
TXNo	Avg			No		Final   	ExpectedAvg
1		477.510437	14893	8653   	509
2		546.149048	6205		3624   	581
3		692.270203	2561		1552   	725
4		980.565857	1002		596   	1013
5		1542.079956	400		252   	1589
6		2758.693848	147		89		2741
7		4971.500000	56		32   		5045
8	 	4689.043457	23		15   		5045
9		4487.856934	7		3   		5045
10		442.250000	4		3   		5045
11		488.000000	1		1   		5045
Overall average: 580.976807

[Discussion:] Contention window appears to double until a plateau from
7 through 9. Weirdly, the contention window appears to drop again from
10, but there are too few frames to draw a conclusion.

==> iperf_33M_rate_36M_att_21dB.pcap.txt <== (lossy link, < 1% delivery)
TXNo	Avg			No	     	Final   ExpectedAvg
1		485.390198	1940		3	   509
2		479.113434	1922		2	   581
3		479.681824	1914		0   	   725
4		485.083038	1903		1   	   1013
5		492.088135	1895		4   	   1589
6		508.322510	1876		1   	   2741
7		524.697876	1870		1   	   5045
8		543.054382	1857		0   	   5045
9		522.970703	1842		0   	   5045
10		478.204132	1837		0   	   5045
11		476.520782	1828		0   	   5045
12		477.531342	1818		0   	   5045
13		476.743652	1810		0   	   5045
14		478.936554	1797		0   	   5045
15		480.699097	1788		0   	   5045
16		482.734314	1784		0   	   5045
17		491.608459	1775		0   	   5045
18		497.458984	1767		1   	   5045
19		495.067932	1752		7   	   5045
20		478.102417	1738		295     5045
21		475.128845	1436		1402   5045
22		492.692322	26		0	   5045
23		471.576935	26		0   	   5045
24		466.884613	26		0   	   5045
25		476.269226	26		0   	   5045
26		462.192322	26		0   	   5045
27		480.961548	26		1   	   5045
28		463.600006	25		24   	   5045
Overall average: 491.068359

[Discussion:] Contention does not appear to increase, and the number
of transmissions per frame is very large. This behaviour is replicated
in the 54M scenario when the link is extremely lossy.

==> iperf_33M_rate_54M_att_1dB.pcap.txt <== (good link, 2400 packets/s)
Average time per TX No:
TXNo	Avg			No		Final   	ExpectedAverage
1		365.551849	23957	23935   	393
2		409.571442	21		21   		465
Overall average: 365.590424

[Discussion: ] Appears relatively normal.

==> iperf_33M_rate_54M_att_10dB.pcap.txt <== (lossy link, but still
100% delivery, 1500 packets/s)
Average time per TX No:
TXNo	Avg			No		Final   	ExpectedAverage
1		364.501190	10134	5915   	393
2		434.138000	4196		2461   	465
3		579.482300	1721		1036   	609
4		837.005859	682		397   	897
5		1365.279175	283		155   	1473
6		2572.007812	128		81 	  	2625
7		4905.195801	46		27   		4929
8		4985.947266	19		12   		4929
9		4627.285645	7		4   		4929
10		366.000000	3		1   		4929
11		335.500000	2		2   		4929
Overall average: 473.477020

[Discussion: ] Appears fine until transmission 10, which appears to
drop the contention window back to a value equivalent to a first
transmission, but there are not enough frames at this point to draw a
conclusion.

==> iperf_33M_rate_54M_att_11dB.pcap.txt <== (lossy link, but still
100% delivery, 680 packets/s)
Average time per TX No:
TXNo	Avg			No		Final   	ExpectedAverage
1		362.082825	2149		539   	393
2		434.672485	1606		368   	465
3		582.795288	1231		307   	609
4		820.347107	919		237   	897
5		1424.753296	673		194   	1473
6		2626.403320	466		143   	2625
7		4734.233887	308		83   		4929
8		4830.244141	217		65   		4929
9		4449.702637	148		33   		4929
10		360.114044	114		36   		4929
11		366.000000	78		20   		4929
12		460.655182	58		20   		4929
13		544.184204	38		9   		4929
14		893.965515	29		7   		4929
15		1361.409058	22		8   		4929
16		2675.285645	14		2   		4929
17		4239.500000	12		5   		4929
18		3198.142822	7		2   		4929
19		5111.799805	5		3   		4929
20		1403.000000	2		1   		4929
Overall average: 1063.129883

[Discussion: ] Everything appears fine until, once again, transmission
10, when the contention window appears to 'restart' - it climbs
steadily until 17. After this point, there are not enough frames to
draw any conclusions.

==> iperf_33M_rate_54M_att_12dB.pcap.txt <== (lossy link, 6% delivery,
400 packets/s)
Average time per TX No:
TXNo	Avg			No		Final   	ExpectedAvg
1		360.460724	4482		14   		393
2		366.068481	4453		16   		465
3		360.871735	4413		13   		609
4		361.535553	4386		18   		897
5		367.526062	4357		60   		1473
6		360.003967	4283		3839   	2625
7		361.778046	419		416   	4929
Overall average: 362.732910

[Discussion:] This exhibits the same problem as the extremely lossy
36M link - the contention window does not appear to rise. Even with
enough frames to draw a good conclusion at transmission 6, the
transmission time average (360) is way below the expected average
(2625).

==> END OF OUTPUT <==

The question here is: why does ath5k/mac80211 send out so many
transmissions, and why does it vary so much based on link quality?
Additionally, why does it appear to 'reset' the contention window
after 9 retransmissions of a frame?

Cheers,

Jonathan

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: ath5k: Weird Retransmission Behaviour
  2010-12-06  6:30 ath5k: Weird Retransmission Behaviour Jonathan Guerin
@ 2010-12-06  8:14 ` Bruno Randolf
  2010-12-06  9:36   ` [ath5k-devel] " Nick Kossifidis
                     ` (3 more replies)
  2010-12-06  9:38 ` Nick Kossifidis
  1 sibling, 4 replies; 27+ messages in thread
From: Bruno Randolf @ 2010-12-06  8:14 UTC (permalink / raw)
  To: Jonathan Guerin; +Cc: linux-wireless, ath5k-devel, nbd

On Mon December 6 2010 15:30:00 Jonathan Guerin wrote:
> [...]

Hi Jonathan!

This is a very interesting setup and test. I guess nobody has looked so 
closely yet... I think this is not necessarily ath5k related, but may be a bug 
of mac80211 or minstrel, but not sure yet, of course...

It's normal that the CW is reset after the retry limits are reached;
this is what the standard says:

"The CW shall be reset to aCWmin after every successful attempt to transmit an 
MPDU or MMPDU, when SLRC reaches dot11LongRetryLimit, or when SSRC reaches 
dot11ShortRetryLimit." (802.11-2007 p261)
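That rule is easy to model. Here is a tiny sketch of the DCF
contention-window update the quoted clause describes (names are mine;
the constants are the OFDM PHY's aCWmin/aCWmax):

```python
ACWMIN, ACWMAX = 15, 1023  # 802.11a OFDM PHY values

def next_cw(cw, success, retry_limit_reached):
    """Per 802.11-2007: double CW (up to aCWmax) after a failed
    attempt; reset it to aCWmin after a successful transmission or
    when SSRC/SLRC reaches dot11(Short|Long)RetryLimit."""
    if success or retry_limit_reached:
        return ACWMIN
    return min(2 * cw + 1, ACWMAX)
```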

But it seems weird that there are so many retransmissions. The default maximum 
numbers of retransmissions should be 7 for short frames and 4 for long frames 
(dot11[Short|Long]RetryLimit), and this is what is set as defaults in mac80211 
(local->hw.conf.short_frame_max_tx_count). Seems we are getting many
retransmissions from minstrel; I added some debug prints:

*** txdesc tries 3
*** mrr 0 tries 3 rate 11
*** mrr 1 tries 3 rate 11
*** mrr 2 tries 3 rate 11

This seems to be the normal case and that would already result in 12 
transmissions.

Another thing that strikes me here is: why use multi-rate retry if the
rate is all the same? (Ignore the actual value of the rate; this is
the HW rate code.)

Other examples: 

*** txdesc tries 2
*** mrr 0 tries 9 rate 12
*** mrr 1 tries 2 rate 13
*** mrr 2 tries 3 rate 11

= 16 transmissions in sum.

*** txdesc tries 9
*** mrr 0 tries 3 rate 11
*** mrr 1 tries 9 rate 8
*** mrr 2 tries 3 rate 11

= 24 transmissions in sum. Again, rate[1] and rate[3] are the same, so why 
bother setting it up twice?
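For clarity, the totals above are just the sum of the four try counts
programmed into the descriptor (the primary rate plus up to three
multi-rate-retry stages, attempted in order); a trivial sketch:

```python
def total_transmissions(txdesc_tries, mrr_tries):
    """Worst-case transmissions for one frame: the primary rate's try
    count plus the tries of each multi-rate-retry (MRR) stage."""
    return txdesc_tries + sum(mrr_tries)

# The debug examples above:
#   total_transmissions(3, (3, 3, 3))  ->  12
#   total_transmissions(2, (9, 2, 3))  ->  16
#   total_transmissions(9, (3, 9, 3))  ->  24
```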

bruno


* Re: [ath5k-devel] ath5k: Weird Retransmission Behaviour
  2010-12-06  8:14 ` Bruno Randolf
@ 2010-12-06  9:36   ` Nick Kossifidis
  2010-12-06 10:53     ` Sedat Dilek
  2010-12-07  1:17     ` Jonathan Guerin
  2010-12-06 18:01   ` Björn Smedman
                     ` (2 subsequent siblings)
  3 siblings, 2 replies; 27+ messages in thread
From: Nick Kossifidis @ 2010-12-06  9:36 UTC (permalink / raw)
  To: Bruno Randolf; +Cc: Jonathan Guerin, ath5k-devel, linux-wireless

2010/12/6 Bruno Randolf <br1@einfach.org>:
> [...]
IDgzNy4wMDU4NTkgwqAgwqAgwqA2ODIgwqAgwqAgwqAgwqAgwqAgwqAgMzk3IMKgIMKgIDg5Nwo+
PiA1IMKgIMKgIMKgIMKgIMKgIMKgIDEzNjUuMjc5MTc1IMKgIMKgIDI4MyDCoCDCoCDCoCDCoCDC
oCDCoCAxNTUgwqAgwqAgMTQ3Mwo+PiA2IMKgIMKgIMKgIMKgIMKgIMKgIDI1NzIuMDA3ODEyIMKg
IMKgIDEyOCDCoCDCoCDCoCDCoCDCoCDCoCA4MSDCoCDCoCDCoCDCoCDCoCDCoCDCoDI2MjUKPj4g
NyDCoCDCoCDCoCDCoCDCoCDCoCA0OTA1LjE5NTgwMSDCoCDCoCA0NiDCoCDCoCDCoCDCoCDCoCDC
oCDCoDI3IMKgIMKgIMKgIMKgIMKgIMKgIMKgNDkyOQo+PiA4IMKgIMKgIMKgIMKgIMKgIMKgIDQ5
ODUuOTQ3MjY2IMKgIMKgIDE5IMKgIMKgIMKgIMKgIMKgIMKgIMKgMTIgwqAgwqAgwqAgwqAgwqAg
wqAgwqA0OTI5Cj4+IDkgwqAgwqAgwqAgwqAgwqAgwqAgNDYyNy4yODU2NDUgwqAgwqAgNyDCoCDC
oCDCoCDCoCDCoCDCoCDCoCA0IMKgIMKgIMKgIMKgIMKgIMKgIMKgIDQ5MjkKPj4gMTAgwqAgwqAg
wqAgwqAgwqAgwqAzNjYuMDAwMDAwIMKgIMKgIMKgMyDCoCDCoCDCoCDCoCDCoCDCoCDCoCAxIMKg
IMKgIMKgIMKgIMKgIMKgIMKgIDQ5MjkKPj4gMTEgwqAgwqAgwqAgwqAgwqAgwqAzMzUuNTAwMDAw
IMKgIMKgIMKgMiDCoCDCoCDCoCDCoCDCoCDCoCDCoCAyIMKgIMKgIMKgIMKgIMKgIMKgIMKgIDQ5
MjkKPj4gT3ZlcmFsbCBhdmVyYWdlOiA0NzMuNDc3MDIwCj4+Cj4+IFtEaXNjdXNzaW9uOiBdIEFw
cGVhcnMgZmluZSwgdW50aWwgdHJhbnNtaXNzaW9uIDEwLCB3aGljaCBhcHBlYXJzIHRvCj4+IGRy
b3AgdGhlIGNvbnRlbnRpb24gd2luZG93IGJhY2sgdG8gYW4gZXF1aXZhbGVudCBmaXJzdCB0cmFu
c21pc3Npb24KPj4gdmFsdWUsIGJ1dCBub3QgZW5vdWdoIGZyYW1lcyBhdCB0aGlzIHBvaW50IHRv
IGRyYXcgYSBjb25jbHVzaW9uLgo+Pgo+PiA9PT4gaXBlcmZfMzNNX3JhdGVfNTRNX2F0dF8xMWRC
LnBjYXAudHh0IDw9PSAobG9zc3kgbGluaywgYnV0IHN0aWxsCj4+IDEwMCUgZGVsaXZlcnksIDY4
MCBwYWNrZXRzL3MpCj4+IEF2ZXJhZ2UgdGltZSBwZXIgVFggTm86Cj4+IFRYTm8gwqBBdmcgwqAg
wqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgTm8gwqAgwqAgwqAgwqAgwqAgwqAgwqBGaW5hbCDC
oCDCoCDCoCDCoCDCoCBFeHBlY3RlZEF2ZXJhZ2UKPj4gMSDCoCDCoCDCoCDCoCDCoCDCoCAzNjIu
MDgyODI1IMKgIMKgIMKgMjE0OSDCoCDCoCDCoCDCoCDCoCDCoDUzOSDCoCDCoCAzOTMKPj4gMiDC
oCDCoCDCoCDCoCDCoCDCoCA0MzQuNjcyNDg1IMKgIMKgIMKgMTYwNiDCoCDCoCDCoCDCoCDCoCDC
oDM2OCDCoCDCoCA0NjUKPj4gMyDCoCDCoCDCoCDCoCDCoCDCoCA1ODIuNzk1Mjg4IMKgIMKgIMKg
MTIzMSDCoCDCoCDCoCDCoCDCoCDCoDMwNyDCoCDCoCA2MDkKPj4gNCDCoCDCoCDCoCDCoCDCoCDC
oCA4MjAuMzQ3MTA3IMKgIMKgIMKgOTE5IMKgIMKgIMKgIMKgIMKgIMKgIDIzNyDCoCDCoCA4OTcK
Pj4gNSDCoCDCoCDCoCDCoCDCoCDCoCAxNDI0Ljc1MzI5NiDCoCDCoCA2NzMgwqAgwqAgwqAgwqAg
wqAgwqAgMTk0IMKgIMKgIDE0NzMKPj4gNiDCoCDCoCDCoCDCoCDCoCDCoCAyNjI2LjQwMzMyMCDC
oCDCoCA0NjYgwqAgwqAgwqAgwqAgwqAgwqAgMTQzIMKgIMKgIDI2MjUKPj4gNyDCoCDCoCDCoCDC
oCDCoCDCoCA0NzM0LjIzMzg4NyDCoCDCoCAzMDggwqAgwqAgwqAgwqAgwqAgwqAgODMgwqAgwqAg
wqAgwqAgwqAgwqAgwqA0OTI5Cj4+IDggwqAgwqAgwqAgwqAgwqAgwqAgNDgzMC4yNDQxNDEgwqAg
wqAgMjE3IMKgIMKgIMKgIMKgIMKgIMKgIDY1IMKgIMKgIMKgIMKgIMKgIMKgIMKgNDkyOQo+PiA5
IMKgIMKgIMKgIMKgIMKgIMKgIDQ0NDkuNzAyNjM3IMKgIMKgIDE0OCDCoCDCoCDCoCDCoCDCoCDC
oCAzMyDCoCDCoCDCoCDCoCDCoCDCoCDCoDQ5MjkKPj4gMTAgwqAgwqAgwqAgwqAgwqAgwqAzNjAu
MTE0MDQ0IMKgIMKgIMKgMTE0IMKgIMKgIMKgIMKgIMKgIMKgIDM2IMKgIMKgIMKgIMKgIMKgIMKg
IMKgNDkyOQo+PiAxMSDCoCDCoCDCoCDCoCDCoCDCoDM2Ni4wMDAwMDAgwqAgwqAgwqA3OCDCoCDC
oCDCoCDCoCDCoCDCoCDCoDIwIMKgIMKgIMKgIMKgIMKgIMKgIMKgNDkyOQo+PiAxMiDCoCDCoCDC
oCDCoCDCoCDCoDQ2MC42NTUxODIgwqAgwqAgwqA1OCDCoCDCoCDCoCDCoCDCoCDCoCDCoDIwIMKg
IMKgIMKgIMKgIMKgIMKgIMKgNDkyOQo+PiAxMyDCoCDCoCDCoCDCoCDCoCDCoDU0NC4xODQyMDQg
wqAgwqAgwqAzOCDCoCDCoCDCoCDCoCDCoCDCoCDCoDkgwqAgwqAgwqAgwqAgwqAgwqAgwqAgNDky
OQo+PiAxNCDCoCDCoCDCoCDCoCDCoCDCoDg5My45NjU1MTUgwqAgwqAgwqAyOSDCoCDCoCDCoCDC
oCDCoCDCoCDCoDcgwqAgwqAgwqAgwqAgwqAgwqAgwqAgNDkyOQo+PiAxNSDCoCDCoCDCoCDCoCDC
oCDCoDEzNjEuNDA5MDU4IMKgIMKgIDIyIMKgIMKgIMKgIMKgIMKgIMKgIMKgOCDCoCDCoCDCoCDC
oCDCoCDCoCDCoCA0OTI5Cj4+IDE2IMKgIMKgIMKgIMKgIMKgIMKgMjY3NS4yODU2NDUgwqAgwqAg
MTQgwqAgwqAgwqAgwqAgwqAgwqAgwqAyIMKgIMKgIMKgIMKgIMKgIMKgIMKgIDQ5MjkKPj4gMTcg
wqAgwqAgwqAgwqAgwqAgwqA0MjM5LjUwMDAwMCDCoCDCoCAxMiDCoCDCoCDCoCDCoCDCoCDCoCDC
oDUgwqAgwqAgwqAgwqAgwqAgwqAgwqAgNDkyOQo+PiAxOCDCoCDCoCDCoCDCoCDCoCDCoDMxOTgu
MTQyODIyIMKgIMKgIDcgwqAgwqAgwqAgwqAgwqAgwqAgwqAgMiDCoCDCoCDCoCDCoCDCoCDCoCDC
oCA0OTI5Cj4+IDE5IMKgIMKgIMKgIMKgIMKgIMKgNTExMS43OTk4MDUgwqAgwqAgNSDCoCDCoCDC
oCDCoCDCoCDCoCDCoCAzIMKgIMKgIMKgIMKgIMKgIMKgIMKgIDQ5MjkKPj4gMjAgwqAgwqAgwqAg
wqAgwqAgwqAxNDAzLjAwMDAwMCDCoCDCoCAyIMKgIMKgIMKgIMKgIMKgIMKgIMKgIDEgwqAgwqAg
wqAgwqAgwqAgwqAgwqAgNDkyOQo+PiBPdmVyYWxsIGF2ZXJhZ2U6IDEwNjMuMTI5ODgzCj4+Cj4+
IFtEaXNjdXNzaW9uOiBdIEV2ZXJ5dGhpbmcgYXBwZWFycyBmaW5lIHVudGlsLCBvbmNlIGFnYWlu
LCB0cmFuc21pc3Npb24KPj4gMTAsIHdoZW4gdGhlIGNvbnRlbnRpb24gd2luZG93cyBhcHBlYXJz
IHRvICdyZXN0YXJ0JyAtIGl0IGNsaW1icwo+PiBzdGVhZGlseSB1bnRpbCAxNy4gQWZ0ZXIgdGhp
cyBwb2ludCwgdGhlcmUgYXJlIG5vdCBlbm91Z2ggZnJhbWVzIHRvCj4+IGRyYXcgYW55IGNvbmNs
dXNpb25zLgo+Pgo+PiA9PT4gaXBlcmZfMzNNX3JhdGVfNTRNX2F0dF8xMmRCLnBjYXAudHh0IDw9
PSAobG9zc3kgbGluaywgNiUgZGVsaXZlcnksCj4+IDQwMCBwYWNrZXRzL3MpCj4+IEF2ZXJhZ2Ug
dGltZSBwZXIgVFggTm86Cj4+IFRYTm8gwqBBdmcgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAg
wqAgTm8gwqAgwqAgwqAgwqAgwqAgwqAgwqBGaW5hbCDCoCDCoCDCoCDCoCDCoCBFeHBlY3RlZEF2
Zwo+PiAxIMKgIMKgIMKgIMKgIMKgIMKgIDM2MC40NjA3MjQgwqAgwqAgwqA0NDgyIMKgIMKgIMKg
IMKgIMKgIMKgMTQgwqAgwqAgwqAgwqAgwqAgwqAgwqAzOTMKPj4gMiDCoCDCoCDCoCDCoCDCoCDC
oCAzNjYuMDY4NDgxIMKgIMKgIMKgNDQ1MyDCoCDCoCDCoCDCoCDCoCDCoDE2IMKgIMKgIMKgIMKg
IMKgIMKgIMKgNDY1Cj4+IDMgwqAgwqAgwqAgwqAgwqAgwqAgMzYwLjg3MTczNSDCoCDCoCDCoDQ0
MTMgwqAgwqAgwqAgwqAgwqAgwqAxMyDCoCDCoCDCoCDCoCDCoCDCoCDCoDYwOQo+PiA0IMKgIMKg
IMKgIMKgIMKgIMKgIDM2MS41MzU1NTMgwqAgwqAgwqA0Mzg2IMKgIMKgIMKgIMKgIMKgIMKgMTgg
wqAgwqAgwqAgwqAgwqAgwqAgwqA4OTcKPj4gNSDCoCDCoCDCoCDCoCDCoCDCoCAzNjcuNTI2MDYy
IMKgIMKgIMKgNDM1NyDCoCDCoCDCoCDCoCDCoCDCoDYwIMKgIMKgIMKgIMKgIMKgIMKgIMKgMTQ3
Mwo+PiA2IMKgIMKgIMKgIMKgIMKgIMKgIDM2MC4wMDM5NjcgwqAgwqAgwqA0MjgzIMKgIMKgIMKg
IMKgIMKgIMKgMzgzOSDCoCDCoDI2MjUKPj4gNyDCoCDCoCDCoCDCoCDCoCDCoCAzNjEuNzc4MDQ2
IMKgIMKgIMKgNDE5IMKgIMKgIMKgIMKgIMKgIMKgIDQxNiDCoCDCoCA0OTI5Cj4+IE92ZXJhbGwg
YXZlcmFnZTogMzYyLjczMjkxMAo+Pgo+PiBbRGlzY3Vzc2lvbjpdIFRoaXMgZXhoaWJpdHMgdGhl
IHNhbWUgcHJvYmxlbSBhcyB0aGUgZXh0cmVtZWx5IGxvc3N5Cj4+IDM2TSBsaW5rIC0gdGhlIGNv
bnRlbnRpb24gd2luZG93IGRvZXMgbm90IGFwcGVhciB0byByaXNlLiBFdmVuIHdpdGgKPj4gZW5v
dWdoIGZyYW1lcyB0byBkcmF3IGEgZ29vZCBjb25jbHVzaW9uIGF0IHRyYW5zbWlzc2lvbiA2LCB0
aGUKPj4gdHJhbnNtaXNzaW9uIHRpbWUgYXZlcmFnZSAoMzYwKSBpcyB3YXkgYmVsb3cgdGhlIGV4
cGVjdGVkIGF2ZXJhZ2UKPj4gKDI2MjUpLgo+PiA9PT4gRU5EIE9GIE9VVFBVVCA8PT0KPj4KPj4g
VGhlIHF1ZXN0aW9uIGhlcmUgaXM6IHdoeSBkb2VzIGF0aDVrL21hYzgwMjExIHNlbmQgb3V0IHNv
IG1hbnkKPj4gdHJhbnNtaXNzaW9ucywgYW5kIHdoeSBkb2VzIGl0IHZhcnkgc28gbXVjaCBiYXNl
ZCBvbiBsaW5rIHF1YWxpdHk/Cj4+IEFkZGl0aW9uYWxseSwgd2h5IGRvZXMgaXQgYXBwZWFyIHRv
ICdyZXNldCcgdGhlIGNvbnRlbnRpb24gd2luZG93Cj4+IGFmdGVyIDkgcmV0cmFuc21pc3Npb25z
IG9mIGEgZnJhbWU/Cj4+Cj4+IENoZWVycywKPj4KPj4gSm9uYXRoYW4KPgo+IEhpIEpvbmF0aGFu
IQo+Cj4gVGhpcyBpcyBhIHZlcnkgaW50ZXJlc3Rpbmcgc2V0dXAgYW5kIHRlc3QuIEkgZ3Vlc3Mg
bm9ib2R5IGhhcyBsb29rZWQgc28KPiBjbG9zZWx5IHlldC4uLiBJIHRoaW5rIHRoaXMgaXMgbm90
IG5lY2Vzc2FyaWx5IGF0aDVrIHJlbGF0ZWQsIGJ1dCBtYXkgYmUgYSBidWcKPiBvZiBtYWM4MDIx
MSBvciBtaW5zdHJlbCwgYnV0IG5vdCBzdXJlIHlldCwgb2YgY291cnNlLi4uCj4KPiBJdCdzIG5v
cm1hbCwgdGhhdCB0aGUgQ1cgaXMgcmVzZXQgYWZ0ZXIgdGhlIHJldHJ5IGxpbWl0cyBhcmUgcmVh
Y2hlZCwgdGhpcyBpcwo+IHdoYXQgdGhlIHN0YW5kYXJkIHNheXM6Cj4KPiAiVGhlIENXIHNoYWxs
IGJlIHJlc2V0IHRvIGFDV21pbiBhZnRlciBldmVyeSBzdWNjZXNzZnVsIGF0dGVtcHQgdG8gdHJh
bnNtaXQgYW4KPiBNUERVIG9yIE1NUERVLCB3aGVuIFNMUkMgcmVhY2hlcyBkb3QxMUxvbmdSZXRy
eUxpbWl0LCBvciB3aGVuIFNTUkMgcmVhY2hlcwo+IGRvdDExU2hvcnRSZXRyeUxpbWl0LiIgKDgw
Mi4xMS0yMDA3IHAyNjEpCj4KPiBCdXQgaXQgc2VlbXMgd2VpcmQgdGhhdCB0aGVyZSBhcmUgc28g
bWFueSByZXRyYW5zbWlzc2lvbnMuIFRoZSBkZWZhdWx0IG1heGltdW0KPiBudW1iZXJzIG9mIHJl
dHJhbnNtaXNzaW9ucyBzaG91bGQgYmUgNyBmb3Igc2hvcnQgZnJhbWVzIGFuZCA0IGZvciBsb25n
IGZyYW1lcwo+IChkb3QxMVtTaG9ydHxMb25nXVJldHJ5TGltaXQpLCBhbmQgdGhpcyBpcyB3aGF0
IGlzIHNldCBhcyBkZWZhdWx0cyBpbiBtYWM4MDIxMQo+IChsb2NhbC0+aHcuY29uZi5zaG9ydF9m
cmFtZV9tYXhfdHhfY291bnQpLiBTZWVtcyB3ZSBhcmUgZ2V0dGluZyBtYW55Cj4gcmV0cmFuc21p
c3Npb25zIGZyb20gbWluc3RlbCwgaSBhZGRlZCBzb21lIGRlYnVnIHByaW50czoKPgoKV2hlbiBh
dGg1ayBkb2Vzbid0IGdldCByZXRyeSBsaW1pdHMgZnJvbSBhYm92ZSBpdCB1c2VzIHRoZSBmb2xs
b3dpbmcKZGVmYXVsdHMgb24gZGN1LgpGb3Igbm93IGkgZG9uJ3QgdGhpbmsgd2UgdXNlIGxvY2Fs
LT5ody5jb25mLnNob3J0X2ZyYW1lX21heF90eF9jb3VudApmb3IgdGhhdCBzbyB0aGUKZGVmYXVs
dCBpcyBhaF9saW1pdF90eF9yZXRyaWVzIChBUjVLX0lOSVRfVFhfUkVUUlkpIGJ1dCBzZWVtcyBp
dCdzCndyb25nIGFuZCB3ZSBzaG91bGQKZml4IGl0Li4uCgovKiBUeCByZXRyeSBsaW1pdHMgKi8K
I2RlZmluZSBBUjVLX0lOSVRfU0hfUkVUUlkgICAgICAgICAgICAgICAgICAgICAgMTAKI2RlZmlu
ZSBBUjVLX0lOSVRfTEdfUkVUUlkgICAgICAgICAgICAgICAgICAgICAgQVI1S19JTklUX1NIX1JF
VFJZCi8qIEZvciBzdGF0aW9uIG1vZGUgKi8KI2RlZmluZSBBUjVLX0lOSVRfU1NIX1JFVFJZICAg
ICAgICAgICAgICAgICAgICAgMzIKI2RlZmluZSBBUjVLX0lOSVRfU0xHX1JFVFJZICAgICAgICAg
ICAgICAgICAgICAgQVI1S19JTklUX1NTSF9SRVRSWQojZGVmaW5lIEFSNUtfSU5JVF9UWF9SRVRS
WSAgICAgICAgICAgICAgICAgICAgICAxMAoKPiAqKiogdHhkZXNjIHRyaWVzIDMKPiAqKiogbXJy
IDAgdHJpZXMgMyByYXRlIDExCj4gKioqIG1yciAxIHRyaWVzIDMgcmF0ZSAxMQo+ICoqKiBtcnIg
MiB0cmllcyAzIHJhdGUgMTEKPgo+IFRoaXMgc2VlbXMgdG8gYmUgdGhlIG5vcm1hbCBjYXNlIGFu
ZCB0aGF0IHdvdWxkIGFscmVhZHkgcmVzdWx0IGluIDEyCj4gdHJhbnNtaXNzaW9ucy4KPgo+IEFu
b3RoZXIgdGhpbmcgdGhhdCBzdHJpa2VzIG1lIGhlcmUgaXM6IHdoeSB1c2UgbXVsdGkgcmF0ZSBy
ZXRyaWVzIGlmIHRoZSByYXRlCj4gaXMgYWxsIHRoZSBzYW1lPyAoSWdub3JlIHRoZSBhY3R1YWwg
dmFsdWUgb2YgdGhlIHJhdGUsIHRoaXMgaXMgdGhlIEhXIHJhdGUKPiBjb2RlKS4KPgo+IE90aGVy
IGV4YW1wbGVzOgo+Cj4gKioqIHR4ZGVzYyB0cmllcyAyCj4gKioqIG1yciAwIHRyaWVzIDkgcmF0
ZSAxMgo+ICoqKiBtcnIgMSB0cmllcyAyIHJhdGUgMTMKPiAqKiogbXJyIDIgdHJpZXMgMyByYXRl
IDExCj4KPiA9IDE2IHRyYW5zbWlzc2lvbnMgaW4gc3VtLgo+Cj4gKioqIHR4ZGVzYyB0cmllcyA5
Cj4gKioqIG1yciAwIHRyaWVzIDMgcmF0ZSAxMQo+ICoqKiBtcnIgMSB0cmllcyA5IHJhdGUgOAo+
ICoqKiBtcnIgMiB0cmllcyAzIHJhdGUgMTEKPgo+ID0gMjQgdHJhbnNtaXNzaW9ucyBpbiBzdW0u
IEFnYWluLCByYXRlWzFdIGFuZCByYXRlWzNdIGFyZSB0aGUgc2FtZSwgc28gd2h5Cj4gYm90aGVy
IHNldHRpbmcgaXQgdXAgdHdpY2U/Cj4KPiBicnVubwo+IF9fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fCj4gYXRoNWstZGV2ZWwgbWFpbGluZyBsaXN0Cj4gYXRo
NWstZGV2ZWxAbGlzdHMuYXRoNWsub3JnCj4gaHR0cHM6Ly9saXN0cy5hdGg1ay5vcmcvbWFpbG1h
bi9saXN0aW5mby9hdGg1ay1kZXZlbAo+CgpBbHNvIG9uIGJhc2UuYwoKMjQwOCAgICAgICAgIC8q
IHNldCB1cCBtdWx0aS1yYXRlIHJldHJ5IGNhcGFiaWxpdGllcyAqLwoyNDA5ICAgICAgICAgaWYg
KHNjLT5haC0+YWhfdmVyc2lvbiA9PSBBUjVLX0FSNTIxMikgewoyNDEwICAgICAgICAgICAgICAg
ICBody0+bWF4X3JhdGVzID0gNDsKMjQxMSAgICAgICAgICAgICAgICAgaHctPm1heF9yYXRlX3Ry
aWVzID0gMTE7CjI0MTIgICAgICAgICB9CgoKCi0tIApHUEcgSUQ6IDB4RDIxREIyREIKQXMgeW91
IHJlYWQgdGhpcyBwb3N0IGdsb2JhbCBlbnRyb3B5IHJpc2VzLiBIYXZlIEZ1biA7LSkKTmljawo=

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: ath5k: Weird Retransmission Behaviour
  2010-12-06  6:30 ath5k: Weird Retransmission Behaviour Jonathan Guerin
  2010-12-06  8:14 ` Bruno Randolf
@ 2010-12-06  9:38 ` Nick Kossifidis
  2010-12-07  1:18   ` Jonathan Guerin
  1 sibling, 1 reply; 27+ messages in thread
From: Nick Kossifidis @ 2010-12-06  9:38 UTC (permalink / raw)
  To: Jonathan Guerin; +Cc: linux-wireless, ath5k-devel, Bruno Randolf

2010/12/6 Jonathan Guerin <jonathan@guerin.id.au>:
> Hi,
>
>
> I've been doing some investigation into the behaviour of contention
> windows and retransmissions.
>
> Firstly, I'll just describe the test scenario and setup that I have. I
> have 3 Via x86 nodes with Atheros AR5001X+ cards. They are tethered to
> each other via coaxial cables, into splitters. They have 20dB of fixed
> attenuation applied to each antenna output, plus a programmable
> variable attenuator on each link. One node acts as a sender, one as a
> receiver, and one simply runs a monitor-mode interface to capture
> packet traces. All 3 are running kernel version 2.6.37-rc2. The sender
> and receiver are configured as IBSS stations and are tuned to 5.18
> GHz.
>
> Here's a really dodgy ASCII diagram of the setup:
>
> S-----[variable attenuator]-----R
> |                               |
> |                               |
> |                               |
> +---------------M---------------+
>
> where S is the Sender node, R is the Receiver node and M is the
> Monitoring capture node.
>
>
> Secondly, I have written a program which will parse a captured pcap
> file from the Monitoring station. It looks for 'chains' of frames with
> the same sequence number, and where the first frame has the Retry bit
> set to false in the header and all following have it set to true. Any
> deviation from this, and the program drops the current chain without
> including it in its stats, and looks for the next chain matching these
> requirements. It averages the amount of time per transmission number
> (i.e. the average of all transmissions which were the first, second,
> third etc. for a unique sequence number). The transmission time of a
> frame is the amount of time between the end of the frame and the end
> of the previous. It tracks these 'chains' of frames with the same
> sequence number. It considers the last transmission number in each
> chain as the 'final' transmission.
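The grouping logic described above can be sketched roughly as follows (a hypothetical reimplementation, not Jonathan's actual tool; frames are assumed to be (seq_no, retry_bit, end_time) tuples in capture order):

```python
def extract_chains(frames):
    """Group frames into retry 'chains': a retry=False frame starts a chain,
    and retry=True frames with the same sequence number extend it. Any other
    pattern drops the current chain from the stats."""
    chains, current = [], []
    for seq, retry, end_time in frames:
        if not retry:
            if current:                      # previous chain ended normally
                chains.append(current)
            current = [(seq, end_time)]      # first transmission of a frame
        elif current and seq == current[0][0]:
            current.append((seq, end_time))  # a retransmission
        else:
            current = []                     # deviation: drop this chain
    if current:
        chains.append(current)
    return chains

demo = [(1, False, 10), (1, True, 20), (1, True, 30),   # 3-tx chain
        (2, False, 40), (3, True, 50),                  # deviation: dropped
        (4, False, 60)]                                 # 1-tx chain
print([len(c) for c in extract_chains(demo)])  # -> [3, 1]
```

The last entry of each surviving chain is the 'final' transmission in the sense used above.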
>
> Finally, the link is loaded using a saturated UDP flow, and the data
> rate is fixed to 54M and 36M. This is specified in the output. The
> output is attached below.
>
> The output describes the fixed link data rate, the variable
> attenuator's value, the delivery ratio, and the number of transmitted
> packets/s. I've added a discussion per result set. Each line outputs
> the transmission number, the average transmission time for this
> number, the total number of transmissions, the number of frames which
> ended their transmissions at this number (i.e. where the chain ended
> its final transmission - this is equivalent to the retransmission
> value from the Radiotap header + 1), and the average expected
> transmission time for that particular transmission number across all
> chains. This is calculated using the airtime calculations from the
> 802.11a standard, with the receipt of an ACK frame, as well as a SIFS
> (16us), which is 28us. If the transmission did not receive an ACK, a
> normal ACK timeout is 50 us, but ath5k appears to have this set to 25
> us, so the value shouldn't be too far from what we'd expect.
>

Did you measure the ACK timeout, or is 25 what you get from the code?
From what we know so far, the hw starts counting after switching to
RX mode, so the phy rx delay (25 us from the standard) is also added.
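To make the timeout arithmetic concrete (values assumed from the 802.11a OFDM PHY parameters, not measured): the standard ACKTimeout is aSIFSTime + aSlotTime + aPHY-RX-START-Delay, and if the hardware timer only starts after the switch to RX, the RX-start delay is implicitly included:

```python
SIFS_US = 16           # aSIFSTime, 802.11a OFDM
SLOT_US = 9            # aSlotTime
PHY_RX_START_US = 25   # aPHY-RX-START-Delay

# Standard ACK timeout, counted from the end of the transmitted frame.
ack_timeout = SIFS_US + SLOT_US + PHY_RX_START_US
print(ack_timeout)  # -> 50

# If the hw starts counting only after switching to RX mode, the RX-start
# delay is already "spent", so a programmed value of 25 us behaves like 50.
programmed = ack_timeout - PHY_RX_START_US
print(programmed)   # -> 25
```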



-- 
GPG ID: 0xD21DB2DB
As you read this post global entropy rises. Have Fun ;-)
Nick


* Re: [ath5k-devel] ath5k: Weird Retransmission Behaviour
  2010-12-06  9:36   ` [ath5k-devel] " Nick Kossifidis
@ 2010-12-06 10:53     ` Sedat Dilek
  2010-12-07  2:29       ` Bruno Randolf
  2010-12-07  1:17     ` Jonathan Guerin
  1 sibling, 1 reply; 27+ messages in thread
From: Sedat Dilek @ 2010-12-06 10:53 UTC (permalink / raw)
  To: Nick Kossifidis
  Cc: Bruno Randolf, Jonathan Guerin, ath5k-devel, linux-wireless

[-- Attachment #1: Type: text/plain, Size: 17604 bytes --]

On Mon, Dec 6, 2010 at 10:36 AM, Nick Kossifidis <mickflemm@gmail.com> wrote:
> 2010/12/6 Bruno Randolf <br1@einfach.org>:
>> On Mon December 6 2010 15:30:00 Jonathan Guerin wrote:
>>> Hi,
>>>
>>>
>>> I've been doing some investigation into the behaviour of contention
>>> windows and retransmissions.
>>>
>>> Firstly, I'll just describe the test scenario and setup that I have. I
>>> have 3 Via x86 nodes with Atheros AR5001X+ cards. They are tethered to
>>> each other via coaxial cables, into splitters. They have 20dB of fixed
>>> attenuation applied to each antenna output, plus a programmable
>>> variable attenuator on each link. One node acts as a sender, one as a
>>> receiver, and one simply runs a monitor-mode interface to capture
>>> packet traces. All 3 are running kernel version 2.6.37-rc2. The sender
>>> and receiver are configured as IBSS stations and are tuned to 5.18
>>> GHz.
>>>
>>> Here's a really dodgy ASCII diagram of the setup:
>>>
>>> S-----[variable attenuator]-----R
>>> |                               |
>>> |                               |
>>> |                               |
>>> +---------------M---------------+
>>>
>>> where S is the Sender node, R is the Receiver node and M is the
>>> Monitoring capture node.
>>>
>>>
>>> Secondly, I have written a program which will parse a captured pcap
>>> file from the Monitoring station. It looks for 'chains' of frames with
>>> the same sequence number, and where the first frame has the Retry bit
>>> set to false in the header and all following have it set to true. Any
>>> deviation from this, and the program drops the current chain without
>>> including it in its stats, and looks for the next chain matching these
>>> requirements. It averages the amount of time per transmission number
>>> (i.e. the average of all transmissions which were the first, second,
>>> third etc. for a unique sequence number). The transmission time of a
>>> frame is the amount of time between the end of the frame and the end
>>> of the previous. It tracks these 'chains' of frames with the same
>>> sequence number. It considers the last transmission number in each
>>> chain as the 'final' transmission.
>>>
>>> Finally, the link is loaded using a saturated UDP flow, and the data
>>> rate is fixed to 54M and 36M. This is specified in the output. The
>>> output is attached below.
>>>
>>> The output describes the fixed link data rate, the variable
>>> attenuator's value, the delivery ratio, and the number of transmitted
>>> packets/s. I've added a discussion per result set. Each line outputs
>>> the transmission number, the average transmission time for this
>>> number, the total number of transmissions, the number of frames which
>>> ended their transmissions at this number (i.e. where the chain ended
>>> its final transmission - this is equivalent to the retransmission
>>> value from the Radiotap header + 1), and the average expected
>>> transmission time for that particular transmission number across all
>>> chains. This is calculated using the airtime calculations from the
>>> 802.11a standard, with the receipt of an ACK frame, as well as a SIFS
>>> (16us), which is 28us. If the transmission did not receive an ACK, a
>>> normal ACK timeout is 50 us, but ath5k appears to have this set to 25
>>> us, so the value shouldn't be too far from what we'd expect.
>>>
>>> The header to each result refers to the rate it was fixed at, as well
>>> as the variable attenuation being added to it. The link also has a
>>> fixed 40dB of attenuation both to protect the cards, as well as give
>>> the necessary range for the variable attenuator to control link
>>> quality.
>>>
>>> ==> iperf_33M_rate_36M_att_1dB.pcap.txt <== (good link, 100% delivery)
>>> Average time per TX No:
>>> TXNo  Avg                     No              Final           ExpectedAvg
>>> 1             477.604980      10463   10462           509
>>> Overall average: 477.604980
>>>
>>> [Discussion:] Nothing, appears normal.
>>>
>>>
>>> ==> iperf_33M_rate_36M_att_18dB.pcap.txt <== (lossy link, but still
>>> 100% delivery)
>>> Average time per TX No:
>>> TXNo  Avg                     No              Final           ExpectedAvg
>>> 1             476.966766      9808            8138    509
>>> 2             550.320496      1663            1403    581
>>> 3             697.552917      255             218     725
>>> 4             1028.756714     37              30              1013
>>> 5             1603.428589     7               7               1589
>>> Overall average: 494.514618
>>>
>>> [Discussion:] Nothing, appears normal. Contention window appears to
>>> double normally.
>>>
>>> ==> iperf_33M_rate_36M_att_19dB.pcap.txt <== (lossy link, but still
>>> 100% delivery)
>>> Average time per TX No:
>>> TXNo  Avg                     No              Final           ExpectedAvg
>>> 1             477.510437      14893   8653    509
>>> 2             546.149048      6205            3624    581
>>> 3             692.270203      2561            1552    725
>>> 4             980.565857      1002            596     1013
>>> 5             1542.079956     400             252     1589
>>> 6             2758.693848     147             89              2741
>>> 7             4971.500000     56              32              5045
>>> 8             4689.043457     23              15              5045
>>> 9             4487.856934     7               3               5045
>>> 10            442.250000      4               3               5045
>>> 11            488.000000      1               1               5045
>>> Overall average: 580.976807
>>>
>>> [Discussion:] Contention window appears to double until a plateau from
>>> 7 through 9. Weirdly, the contention window appears to drop again
>>> from 10, but
>>> there are too few frames to draw a conclusion.
>>>
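The ExpectedAvg columns in these tables are consistent with plain binary exponential backoff (CWmin = 15, CWmax = 1023, 9 us slots): each attempt adds CW/2 slots of mean backoff to a fixed per-rate base time. A quick reconstruction (base values inferred from the first rows of the tables, not taken from the code):

```python
SLOT_US = 9
CW_MIN, CW_MAX = 15, 1023

def expected_avg(tx_no, base_us):
    """Fixed per-rate time plus mean backoff (CW/2 slots) for attempt tx_no."""
    cw = min((CW_MIN + 1) * 2 ** (tx_no - 1) - 1, CW_MAX)
    return base_us + cw / 2 * SLOT_US

# Base times chosen so that attempt 1 matches the tables (36M: 509, 54M: 393).
base_36m = 509 - (CW_MIN / 2) * SLOT_US   # 441.5
base_54m = 393 - (CW_MIN / 2) * SLOT_US   # 325.5

print([round(expected_avg(n, base_36m)) for n in range(1, 8)])
# -> [509, 581, 725, 1013, 1589, 2741, 5045]
print([round(expected_avg(n, base_54m)) for n in range(1, 8)])
# -> [393, 465, 609, 897, 1473, 2625, 4929]
```

The CW saturates at 1023 from attempt 7 on, which matches the 5045/4929 plateau in the tables.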
>>> ==> iperf_33M_rate_36M_att_21dB.pcap.txt <== (lossy link, < 1% delivery)
>>> TXNo  Avg                     No              Final   ExpectedAvg
>>> 1             485.390198      1940            3          509
>>> 2             479.113434      1922            2          581
>>> 3             479.681824      1914            0          725
>>> 4             485.083038      1903            1          1013
>>> 5             492.088135      1895            4          1589
>>> 6             508.322510      1876            1          2741
>>> 7             524.697876      1870            1          5045
>>> 8             543.054382      1857            0          5045
>>> 9             522.970703      1842            0          5045
>>> 10            478.204132      1837            0          5045
>>> 11            476.520782      1828            0          5045
>>> 12            477.531342      1818            0          5045
>>> 13            476.743652      1810            0          5045
>>> 14            478.936554      1797            0          5045
>>> 15            480.699097      1788            0          5045
>>> 16            482.734314      1784            0          5045
>>> 17            491.608459      1775            0          5045
>>> 18            497.458984      1767            1          5045
>>> 19            495.067932      1752            7          5045
>>> 20            478.102417      1738            295     5045
>>> 21            475.128845      1436            1402   5045
>>> 22            492.692322      26              0          5045
>>> 23            471.576935      26              0          5045
>>> 24            466.884613      26              0          5045
>>> 25            476.269226      26              0          5045
>>> 26            462.192322      26              0          5045
>>> 27            480.961548      26              1          5045
>>> 28            463.600006      25              24         5045
>>> Overall average: 491.068359
>>>
>>> [Discussion:] Contention does not appear to increase, and the number
>>> of transmissions per frame is very large. This behaviour is replicated
>>> with the 54M scenario when a link is extremely lossy.
>>>
>>> ==> iperf_33M_rate_54M_att_1dB.pcap.txt <== (good link, 2400 packets/s)
>>> Average time per TX No:
>>> TXNo  Avg                     No              Final           ExpectedAverage
>>> 1             365.551849      23957   23935           393
>>> 2             409.571442      21              21              465
>>> Overall average: 365.590424
>>>
>>> [Discussion: ] Appears relatively normal.
>>>
>>> ==> iperf_33M_rate_54M_att_10dB.pcap.txt <== (lossy link, but still
>>> 100% delivery, 1500 packets/s)
>>> Average time per TX No:
>>> TXNo  Avg                     No              Final           ExpectedAverage
>>> 1             364.501190      10134   5915    393
>>> 2             434.138000      4196            2461    465
>>> 3             579.482300      1721            1036    609
>>> 4             837.005859      682             397     897
>>> 5             1365.279175     283             155     1473
>>> 6             2572.007812     128             81              2625
>>> 7             4905.195801     46              27              4929
>>> 8             4985.947266     19              12              4929
>>> 9             4627.285645     7               4               4929
>>> 10            366.000000      3               1               4929
>>> 11            335.500000      2               2               4929
>>> Overall average: 473.477020
>>>
>>> [Discussion: ] Appears fine, until transmission 10, which appears to
>>> drop the contention window back to an equivalent first transmission
>>> value, but not enough frames at this point to draw a conclusion.
>>>
>>> ==> iperf_33M_rate_54M_att_11dB.pcap.txt <== (lossy link, but still
>>> 100% delivery, 680 packets/s)
>>> Average time per TX No:
>>> TXNo  Avg                     No              Final           ExpectedAverage
>>> 1             362.082825      2149            539     393
>>> 2             434.672485      1606            368     465
>>> 3             582.795288      1231            307     609
>>> 4             820.347107      919             237     897
>>> 5             1424.753296     673             194     1473
>>> 6             2626.403320     466             143     2625
>>> 7             4734.233887     308             83              4929
>>> 8             4830.244141     217             65              4929
>>> 9             4449.702637     148             33              4929
>>> 10            360.114044      114             36              4929
>>> 11            366.000000      78              20              4929
>>> 12            460.655182      58              20              4929
>>> 13            544.184204      38              9               4929
>>> 14            893.965515      29              7               4929
>>> 15            1361.409058     22              8               4929
>>> 16            2675.285645     14              2               4929
>>> 17            4239.500000     12              5               4929
>>> 18            3198.142822     7               2               4929
>>> 19            5111.799805     5               3               4929
>>> 20            1403.000000     2               1               4929
>>> Overall average: 1063.129883
>>>
>>> [Discussion: ] Everything appears fine until, once again, transmission
>>> 10, when the contention window appears to 'restart' - it climbs
>>> steadily until 17. After this point, there are not enough frames to
>>> draw any conclusions.
>>>
>>> ==> iperf_33M_rate_54M_att_12dB.pcap.txt <== (lossy link, 6% delivery,
>>> 400 packets/s)
>>> Average time per TX No:
>>> TXNo  Avg                     No              Final           ExpectedAvg
>>> 1             360.460724      4482            14              393
>>> 2             366.068481      4453            16              465
>>> 3             360.871735      4413            13              609
>>> 4             361.535553      4386            18              897
>>> 5             367.526062      4357            60              1473
>>> 6             360.003967      4283            3839    2625
>>> 7             361.778046      419             416     4929
>>> Overall average: 362.732910
>>>
>>> [Discussion:] This exhibits the same problem as the extremely lossy
>>> 36M link - the contention window does not appear to rise. Even with
>>> enough frames to draw a good conclusion at transmission 6, the
>>> transmission time average (360) is way below the expected average
>>> (2625).
>>> ==> END OF OUTPUT <==
>>>
>>> The question here is: why does ath5k/mac80211 send out so many
>>> transmissions, and why does it vary so much based on link quality?
>>> Additionally, why does it appear to 'reset' the contention window
>>> after 9 retransmissions of a frame?
>>>
>>> Cheers,
>>>
>>> Jonathan
>>
>> Hi Jonathan!
>>
>> This is a very interesting setup and test. I guess nobody has looked so
>> closely yet... I think this is not necessarily ath5k related, but may be a bug
>> of mac80211 or minstrel, but not sure yet, of course...
>>
>> It's normal, that the CW is reset after the retry limits are reached, this is
>> what the standard says:
>>
>> "The CW shall be reset to aCWmin after every successful attempt to transmit an
>> MPDU or MMPDU, when SLRC reaches dot11LongRetryLimit, or when SSRC reaches
>> dot11ShortRetryLimit." (802.11-2007 p261)
>>
>> But it seems weird that there are so many retransmissions. The default maximum
>> numbers of retransmissions should be 7 for short frames and 4 for long frames
>> (dot11[Short|Long]RetryLimit), and this is what is set as defaults in mac80211
>> (local->hw.conf.short_frame_max_tx_count). Seems we are getting many
>> retransmissions from minstrel, I added some debug prints:
>>
>
> When ath5k doesn't get retry limits from above, it uses the following
> defaults on the DCU.
> For now I don't think we use local->hw.conf.short_frame_max_tx_count
> for that, so the default is ah_limit_tx_retries (AR5K_INIT_TX_RETRY),
> but it seems that's wrong and we should fix it...
>
> /* Tx retry limits */
> #define AR5K_INIT_SH_RETRY                      10
> #define AR5K_INIT_LG_RETRY                      AR5K_INIT_SH_RETRY
> /* For station mode */
> #define AR5K_INIT_SSH_RETRY                     32
> #define AR5K_INIT_SLG_RETRY                     AR5K_INIT_SSH_RETRY
> #define AR5K_INIT_TX_RETRY                      10
>
>> *** txdesc tries 3
>> *** mrr 0 tries 3 rate 11
>> *** mrr 1 tries 3 rate 11
>> *** mrr 2 tries 3 rate 11
>>
>> This seems to be the normal case and that would already result in 12
>> transmissions.
>>
>> Another thing that strikes me here is: why use multi rate retries if the rate
>> is all the same? (Ignore the actual value of the rate, this is the HW rate
>> code).
>>
>> Other examples:
>>
>> *** txdesc tries 2
>> *** mrr 0 tries 9 rate 12
>> *** mrr 1 tries 2 rate 13
>> *** mrr 2 tries 3 rate 11
>>
>> = 16 transmissions in sum.
>>
>> *** txdesc tries 9
>> *** mrr 0 tries 3 rate 11
>> *** mrr 1 tries 9 rate 8
>> *** mrr 2 tries 3 rate 11
>>
>> = 24 transmissions in sum. Again, rate[1] and rate[3] are the same, so why
>> bother setting it up twice?
>>
>> bruno
>> _______________________________________________
>> ath5k-devel mailing list
>> ath5k-devel@lists.ath5k.org
>> https://lists.ath5k.org/mailman/listinfo/ath5k-devel
>>
>
> Also on base.c
>
> 2408         /* set up multi-rate retry capabilities */
> 2409         if (sc->ah->ah_version == AR5K_AR5212) {
> 2410                 hw->max_rates = 4;
> 2411                 hw->max_rate_tries = 11;
> 2412         }
>
>
>
> --
> GPG ID: 0xD21DB2DB
> As you read this post global entropy rises. Have Fun ;-)
> Nick
>

You mean something like the attached patch?

- Sedat -

[-- Attachment #2: ath5k-Set-AR5K_INIT_TX_RETRY-and-max_rate_tries-to-3.patch --]
[-- Type: plain/text, Size: 983 bytes --]

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: ath5k: Weird Retransmission Behaviour
  2010-12-06  8:14 ` Bruno Randolf
  2010-12-06  9:36   ` [ath5k-devel] " Nick Kossifidis
@ 2010-12-06 18:01   ` Björn Smedman
  2010-12-07  1:19     ` Jonathan Guerin
  2010-12-07  1:12   ` Jonathan Guerin
  2010-12-08 16:08   ` Bob Copeland
  3 siblings, 1 reply; 27+ messages in thread
From: Björn Smedman @ 2010-12-06 18:01 UTC (permalink / raw)
  To: Jonathan Guerin; +Cc: Bruno Randolf, linux-wireless, ath5k-devel, nbd

On Mon, Dec 6, 2010 at 9:14 AM, Bruno Randolf <br1@einfach.org> wrote:
[snip]
>
> Other examples:
>
> *** txdesc tries 2
> *** mrr 0 tries 9 rate 12
> *** mrr 1 tries 2 rate 13
> *** mrr 2 tries 3 rate 11
>
> = 16 transmissions in sum.
>
> *** txdesc tries 9
> *** mrr 0 tries 3 rate 11
> *** mrr 1 tries 9 rate 8
> *** mrr 2 tries 3 rate 11
>
> = 24 transmissions in sum. Again, rate[1] and rate[3] are the same, so why
> bother setting it up twice?

I remember from experimenting with rate control in madwifi that weird
stuff can happen when you go above ATH_TXMAXTRY = 13 tx attempts in
total (all mrr segments combined). We thought we saw some significant
improvement on poor links when we reduced retries to fit the whole mrr
chain into 13 attempts in total, but we didn't have the equipment to
really verify that. Perhaps something you could try, Jonathan, in your
excellent testbed?
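
That capping idea can be sketched as follows (a toy illustration, not
madwifi or ath5k API; `clamp_mrr` and the 4-element layout are
hypothetical names for base tries plus three mrr segments):

```c
#include <assert.h>

/* Trim a 4-segment retry chain so the combined number of tx attempts
 * never exceeds the madwifi-era ATH_TXMAXTRY = 13.  Segments are
 * trimmed from the tail, keeping the earlier (preferred) rates. */
#define ATH_TXMAXTRY 13

void clamp_mrr(int tries[4])
{
	int budget = ATH_TXMAXTRY;
	int i;

	for (i = 0; i < 4; i++) {
		if (tries[i] > budget)
			tries[i] = budget;
		budget -= tries[i];
	}
}
```

Applied to the {2,9,2,3} example above (16 attempts), this yields
{2,9,2,0} = 13 attempts.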

/Björn


* Re: ath5k: Weird Retransmission Behaviour
  2010-12-06  8:14 ` Bruno Randolf
  2010-12-06  9:36   ` [ath5k-devel] " Nick Kossifidis
  2010-12-06 18:01   ` Björn Smedman
@ 2010-12-07  1:12   ` Jonathan Guerin
  2010-12-07  2:34     ` Bruno Randolf
  2010-12-08 16:08   ` Bob Copeland
  3 siblings, 1 reply; 27+ messages in thread
From: Jonathan Guerin @ 2010-12-07  1:12 UTC (permalink / raw)
  To: Bruno Randolf; +Cc: linux-wireless, ath5k-devel, nbd

On Mon, Dec 6, 2010 at 6:14 PM, Bruno Randolf <br1@einfach.org> wrote:
> On Mon December 6 2010 15:30:00 Jonathan Guerin wrote:
>> Hi,
>>
>>
>> I've been doing some investigation into the behaviour of contention
>> windows and retransmissions.
>>
>> Firstly, I'll just describe the test scenario and setup that I have. I
>> have 3 Via x86 nodes with Atheros AR5001X+ cards. They are tethered to
>> each other via coaxial cables, into splitters. They have 20dB of fixed
>> attenuation applied to each antenna output, plus a programmable
>> variable attenuator on each link. One node acts as a sender, one as a
>> receiver, and one simply runs a monitor-mode interface to capture
>> packet traces. All 3 are running kernel version 2.6.37-rc2. The sender
>> and receiver are configured as IBSS stations and are tuned to 5.18
>> GHz.
>>
>> Here's a really dodgy ASCII diagram of the setup:
>>
>> S-----[variable attenuator]-----R
>> |                               |
>> +------------M-----------------+
>>
>> where S is the Sender node, R is the Receiver node and M is the
>> Monitoring capture node.
>>
>>
>> Secondly, I have written a program which will parse a captured pcap
>> file from the Monitoring station. It looks for 'chains' of frames with
>> the same sequence number, and where the first frame has the Retry bit
>> set to false in the header and all following have it set to true. Any
>> deviation from this, and the program drops the current chain without
>> including it in its stats, and looks for the next chain matching these
>> requirements. It averages the amount of time per transmission number
>> (i.e. the average of all transmissions which were the first, second,
>> third etc. for a unique sequence number). The transmission time of a
>> frame is the amount of time between the end of the frame and the end
>> of the previous. It tracks these 'chains' of frames with the same
>> sequence number. It considers the last transmission number in each
>> chain as the 'final' transmission.
>>
>> Finally, the link is loaded using a saturated UDP flow, and the data
>> rate is fixed to 54M and 36M. This is specified in the output. The
>> output is attached below.
>>
>> The output describes the fixed link data rate, the variable
>> attenuator's value, the delivery ratio, and the number of transmitted
>> packets/s. I've added a discussion per result set. Each line outputs
>> the transmission number, the average transmission time for this
>> number, the total number of transmissions, the number of frames which
>> ended their transmissions at this number (i.e. where the chain ended
>> its final transmission - this is equivalent to the retransmission
>> value from the Radiotap header + 1), and the average expected
>> transmission time for all that particular transmission number in all
>> chains. This is calculated using the airtime calculations from the
>> 802.11a standard, with the receipt of an ACK frame, as well as a SIFS
>> (16us), which is 28us. If the transmission did not receive an ACK, a
>> normal ACK timeout is 50 us, but ath5k appears to have this set to 25
>> us, so the value shouldn't be too far from what to expect.
>>
>> The header to each result refers to the rate it was fixed at, as well
>> as the variable attenuation being added to it. The link also has a
>> fixed 40dB of attenuation both to protect the cards, as well as give
>> the necessary range for the variable attenuator to control link
>> quality.
>>
>> ==> iperf_33M_rate_36M_att_1dB.pcap.txt <== (good link, 100% delivery)
>> Average time per TX No:
>> TXNo  Avg                     No              Final           ExpectedAvg
>> 1             477.604980      10463   10462           509
>> Overall average: 477.604980
>>
>> [Discussion:] Nothing, appears normal.
>>
>>
>> ==> iperf_33M_rate_36M_att_18dB.pcap.txt <== (lossy link, but still
>> 100% delivery)
>> Average time per TX No:
>> TXNo  Avg                     No              Final           ExpectedAvg
>> 1             476.966766      9808            8138    509
>> 2             550.320496      1663            1403    581
>> 3             697.552917      255             218     725
>> 4             1028.756714     37              30              1013
>> 5             1603.428589     7               7               1589
>> Overall average: 494.514618
>>
>> [Discussion:] Nothing, appears normal. Contention window appears to
>> double normally.
>>
>> ==> iperf_33M_rate_36M_att_19dB.pcap.txt <== (lossy link, but still
>> 100% delivery)
>> Average time per TX No:
>> TXNo  Avg                     No              Final           ExpectedAvg
>> 1             477.510437      14893   8653    509
>> 2             546.149048      6205            3624    581
>> 3             692.270203      2561            1552    725
>> 4             980.565857      1002            596     1013
>> 5             1542.079956     400             252     1589
>> 6             2758.693848     147             89              2741
>> 7             4971.500000     56              32              5045
>> 8             4689.043457     23              15              5045
>> 9             4487.856934     7               3               5045
>> 10            442.250000      4               3               5045
>> 11            488.000000      1               1               5045
>> Overall average: 580.976807
>>
>> [Discussion:] Contention window appears to double until a plateau from
>> 7 through 9. Weirdly, the contention window appears to be drop again
>> from 10, but
>> there are too few frames to draw a conclusion.
>>
>> ==> iperf_33M_rate_36M_att_21dB.pcap.txt <== (lossy link, < 1% delivery)
>> TXNo  Avg                     No              Final   ExpectedAvg
>> 1             485.390198      1940            3          509
>> 2             479.113434      1922            2          581
>> 3             479.681824      1914            0          725
>> 4             485.083038      1903            1          1013
>> 5             492.088135      1895            4          1589
>> 6             508.322510      1876            1          2741
>> 7             524.697876      1870            1          5045
>> 8             543.054382      1857            0          5045
>> 9             522.970703      1842            0          5045
>> 10            478.204132      1837            0          5045
>> 11            476.520782      1828            0          5045
>> 12            477.531342      1818            0          5045
>> 13            476.743652      1810            0          5045
>> 14            478.936554      1797            0          5045
>> 15            480.699097      1788            0          5045
>> 16            482.734314      1784            0          5045
>> 17            491.608459      1775            0          5045
>> 18            497.458984      1767            1          5045
>> 19            495.067932      1752            7          5045
>> 20            478.102417      1738            295     5045
>> 21            475.128845      1436            1402   5045
>> 22            492.692322      26              0          5045
>> 23            471.576935      26              0          5045
>> 24            466.884613      26              0          5045
>> 25            476.269226      26              0          5045
>> 26            462.192322      26              0          5045
>> 27            480.961548      26              1          5045
>> 28            463.600006      25              24         5045
>> Overall average: 491.068359
>>
>> [Discussion:] Contention does not appear to increase, and the number
>> of transmissions per frame is very large. This behaviour is replicated
>> with the 54M scenario when a link is extremely lossy.
>>
>> ==> iperf_33M_rate_54M_att_1dB.pcap.txt <== (good link, 2400 packets/s)
>> Average time per TX No:
>> TXNo  Avg                     No              Final           ExpectedAverage
>> 1             365.551849      23957   23935           393
>> 2             409.571442      21              21              465
>> Overall average: 365.590424
>>
>> [Discussion: ] Appears relatively normal.
>>
>> ==> iperf_33M_rate_54M_att_10dB.pcap.txt <== (lossy link, but still
>> 100% delivery, 1500 packets/s)
>> Average time per TX No:
>> TXNo  Avg                     No              Final           ExpectedAverage
>> 1             364.501190      10134   5915    393
>> 2             434.138000      4196            2461    465
>> 3             579.482300      1721            1036    609
>> 4             837.005859      682             397     897
>> 5             1365.279175     283             155     1473
>> 6             2572.007812     128             81              2625
>> 7             4905.195801     46              27              4929
>> 8             4985.947266     19              12              4929
>> 9             4627.285645     7               4               4929
>> 10            366.000000      3               1               4929
>> 11            335.500000      2               2               4929
>> Overall average: 473.477020
>>
>> [Discussion: ] Appears fine, until transmission 10, which appears to
>> drop the contention window back to an equivalent first transmission
>> value, but not enough frames at this point to draw a conclusion.
>>
>> ==> iperf_33M_rate_54M_att_11dB.pcap.txt <== (lossy link, but still
>> 100% delivery, 680 packets/s)
>> Average time per TX No:
>> TXNo  Avg                     No              Final           ExpectedAverage
>> 1             362.082825      2149            539     393
>> 2             434.672485      1606            368     465
>> 3             582.795288      1231            307     609
>> 4             820.347107      919             237     897
>> 5             1424.753296     673             194     1473
>> 6             2626.403320     466             143     2625
>> 7             4734.233887     308             83              4929
>> 8             4830.244141     217             65              4929
>> 9             4449.702637     148             33              4929
>> 10            360.114044      114             36              4929
>> 11            366.000000      78              20              4929
>> 12            460.655182      58              20              4929
>> 13            544.184204      38              9               4929
>> 14            893.965515      29              7               4929
>> 15            1361.409058     22              8               4929
>> 16            2675.285645     14              2               4929
>> 17            4239.500000     12              5               4929
>> 18            3198.142822     7               2               4929
>> 19            5111.799805     5               3               4929
>> 20            1403.000000     2               1               4929
>> Overall average: 1063.129883
>>
>> [Discussion: ] Everything appears fine until, once again, transmission
>> 10, when the contention window appears to 'restart' - it climbs
>> steadily until 17. After this point, there are not enough frames to
>> draw any conclusions.
>>
>> ==> iperf_33M_rate_54M_att_12dB.pcap.txt <== (lossy link, 6% delivery,
>> 400 packets/s)
>> Average time per TX No:
>> TXNo  Avg                     No              Final           ExpectedAvg
>> 1             360.460724      4482            14              393
>> 2             366.068481      4453            16              465
>> 3             360.871735      4413            13              609
>> 4             361.535553      4386            18              897
>> 5             367.526062      4357            60              1473
>> 6             360.003967      4283            3839    2625
>> 7             361.778046      419             416     4929
>> Overall average: 362.732910
>>
>> [Discussion:] This exhibits the same problem as the extremely lossy
>> 36M link - the contention window does not appear to rise. Even with
>> enough frames to draw a good conclusion at transmission 6, the
>> transmission time average (360) is way below the expected average
>> (2625).
>> ==> END OF OUTPUT <==
>>
>> The question here is: why does ath5k/mac80211 send out so many
>> transmissions, and why does it vary so much based on link quality?
>> Additionally, why does it appear to 'reset' the contention window
>> after 9 retransmissions of a frame?
>>
>> Cheers,
>>
>> Jonathan
>
> Hi Jonathan!
>
> This is a very interesting setup and test. I guess nobody has looked so
> closely yet... I think this is not necessarily ath5k related, but may be a bug
> of mac80211 or minstrel, but not sure yet, of course...
>
> It's normal that the CW is reset after the retry limits are reached;
> this is what the standard says:
>
> "The CW shall be reset to aCWmin after every successful attempt to transmit an
> MPDU or MMPDU, when SLRC reaches dot11LongRetryLimit, or when SSRC reaches
> dot11ShortRetryLimit." (802.11-2007 p261)

Good point, I forgot to check the standard on this!
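
For reference, a minimal model of the rule quoted above (aCWmin = 15
and aCWmax = 1023 for the OFDM PHY; function and parameter names are
illustrative, not driver code):

```c
#include <assert.h>

/* Toy model of 802.11-2007 9.9.1 (p261): the CW doubles on each failed
 * attempt up to aCWmax, and resets to aCWmin after a success or once
 * the station retry counter reaches the retry limit. */
enum { ACWMIN = 15, ACWMAX = 1023 };

int cw_next(int cw, int tx_ok, int retries, int retry_limit)
{
	if (tx_ok || retries >= retry_limit)
		return ACWMIN;		/* reset per the standard */
	cw = (cw + 1) * 2 - 1;		/* 15 -> 31 -> 63 -> ... -> 1023 */
	return cw > ACWMAX ? ACWMAX : cw;
}
```

So with a large retry limit programmed into the hardware, the reset
point moves well past where mac80211's defaults would put it.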

>
> But it seems weird that there are so many retransmissions. The default maximum
> numbers of retransmissions should be 7 for short frames and 4 for long frames
> (dot11[Short|Long]RetryLimit), and this is what is set as defaults in mac80211
> (local->hw.conf.short_frame_max_tx_count). Seems we are getting many
> retransmissions from minstrel, I added some debug prints:
>
> *** txdesc tries 3
> *** mrr 0 tries 3 rate 11
> *** mrr 1 tries 3 rate 11
> *** mrr 2 tries 3 rate 11
>
> This seems to be the normal case and that would already result in 12
> transmissions.
>
> Another thing that strikes me here is: why use multi rate retries if the rate
> is all the same? (Ignore the actual value of the rate, this is the HW rate
> code).
>
> Other examples:
>
> *** txdesc tries 2
> *** mrr 0 tries 9 rate 12
> *** mrr 1 tries 2 rate 13
> *** mrr 2 tries 3 rate 11
>
> = 16 transmissions in sum.
>
> *** txdesc tries 9
> *** mrr 0 tries 3 rate 11
> *** mrr 1 tries 9 rate 8
> *** mrr 2 tries 3 rate 11
>
> = 24 transmissions in sum. Again, rate[1] and rate[3] are the same, so why
> bother setting it up twice?

I'm not sure if you still had a fixed rate set here - and I don't know
100% how minstrel works - but it could be that minstrel is trying to
do some probing for better rates (if it was set to auto-rate)?

>
> bruno
>


* Re: [ath5k-devel] ath5k: Weird Retransmission Behaviour
  2010-12-06  9:36   ` [ath5k-devel] " Nick Kossifidis
  2010-12-06 10:53     ` Sedat Dilek
@ 2010-12-07  1:17     ` Jonathan Guerin
  2010-12-08  8:06       ` Bruno Randolf
  1 sibling, 1 reply; 27+ messages in thread
From: Jonathan Guerin @ 2010-12-07  1:17 UTC (permalink / raw)
  To: Nick Kossifidis; +Cc: Bruno Randolf, ath5k-devel, linux-wireless

On Mon, Dec 6, 2010 at 7:36 PM, Nick Kossifidis <mickflemm@gmail.com> wrote:
> 2010/12/6 Bruno Randolf <br1@einfach.org>:
>> On Mon December 6 2010 15:30:00 Jonathan Guerin wrote:
>>> Hi,
>>>
>>>
>>> I've been doing some investigation into the behaviour of contention
>>> windows and retransmissions.
>>>
>>> Firstly, I'll just describe the test scenario and setup that I have. I
>>> have 3 Via x86 nodes with Atheros AR5001X+ cards. They are tethered to
>>> each other via coaxial cables, into splitters. They have 20dB of fixed
>>> attenuation applied to each antenna output, plus a programmable
>>> variable attenuator on each link. One node acts as a sender, one as a
>>> receiver, and one simply runs a monitor-mode interface to capture
>>> packet traces. All 3 are running kernel version 2.6.37-rc2. The sender
>>> and receiver are configured as IBSS stations and are tuned to 5.18
>>> GHz.
>>>
>>> Here's a really dodgy ASCII diagram of the setup:
>>>
>>> S-----[variable attenuator]-----R
>>> |                               |
>>> +------------M-----------------+
>>>
>>> where S is the Sender node, R is the Receiver node and M is the
>>> Monitoring capture node.
>>>
>>>
>>> Secondly, I have written a program which will parse a captured pcap
>>> file from the Monitoring station. It looks for 'chains' of frames with
>>> the same sequence number, and where the first frame has the Retry bit
>>> set to false in the header and all following have it set to true. Any
>>> deviation from this, and the program drops the current chain without
>>> including it in its stats, and looks for the next chain matching these
>>> requirements. It averages the amount of time per transmission number
>>> (i.e. the average of all transmissions which were the first, second,
>>> third etc. for a unique sequence number). The transmission time of a
>>> frame is the amount of time between the end of the frame and the end
>>> of the previous. It tracks these 'chains' of frames with the same
>>> sequence number. It considers the last transmission number in each
>>> chain as the 'final' transmission.
>>>
>>> Finally, the link is loaded using a saturated UDP flow, and the data
>>> rate is fixed to 54M and 36M. This is specified in the output. The
>>> output is attached below.
>>>
>>> The output describes the fixed link data rate, the variable
>>> attenuator's value, the delivery ratio, and the number of transmitted
>>> packets/s. I've added a discussion per result set. Each line outputs
>>> the transmission number, the average transmission time for this
>>> number, the total number of transmissions, the number of frames which
>>> ended their transmissions at this number (i.e. where the chain ended
>>> its final transmission - this is equivalent to the retransmission
>>> value from the Radiotap header + 1), and the average expected
>>> transmission time for all that particular transmission number in all
>>> chains. This is calculated using the airtime calculations from the
>>> 802.11a standard, with the receipt of an ACK frame, as well as a SIFS
>>> (16us), which is 28us. If the transmission did not receive an ACK, a
>>> normal ACK timeout is 50 us, but ath5k appears to have this set to 25
>>> us, so the value shouldn't be too far from what to expect.
>>>
>>> The header to each result refers to the rate it was fixed at, as well
>>> as the variable attenuation being added to it. The link also has a
>>> fixed 40dB of attenuation both to protect the cards, as well as give
>>> the necessary range for the variable attenuator to control link
>>> quality.
>>>
>>> ==> iperf_33M_rate_36M_att_1dB.pcap.txt <== (good link, 100% delivery)
>>> Average time per TX No:
>>> TXNo  Avg                     No              Final           ExpectedAvg
>>> 1             477.604980      10463   10462           509
>>> Overall average: 477.604980
>>>
>>> [Discussion:] Nothing, appears normal.
>>>
>>>
>>> ==> iperf_33M_rate_36M_att_18dB.pcap.txt <== (lossy link, but still
>>> 100% delivery)
>>> Average time per TX No:
>>> TXNo  Avg                     No              Final           ExpectedAvg
>>> 1             476.966766      9808            8138    509
>>> 2             550.320496      1663            1403    581
>>> 3             697.552917      255             218     725
>>> 4             1028.756714     37              30              1013
>>> 5             1603.428589     7               7               1589
>>> Overall average: 494.514618
>>>
>>> [Discussion:] Nothing, appears normal. Contention window appears to
>>> double normally.
>>>
>>> ==> iperf_33M_rate_36M_att_19dB.pcap.txt <== (lossy link, but still
>>> 100% delivery)
>>> Average time per TX No:
>>> TXNo  Avg                     No              Final           ExpectedAvg
>>> 1             477.510437      14893   8653    509
>>> 2             546.149048      6205            3624    581
>>> 3             692.270203      2561            1552    725
>>> 4             980.565857      1002            596     1013
>>> 5             1542.079956     400             252     1589
>>> 6             2758.693848     147             89              2741
>>> 7             4971.500000     56              32              5045
>>> 8             4689.043457     23              15              5045
>>> 9             4487.856934     7               3               5045
>>> 10            442.250000      4               3               5045
>>> 11            488.000000      1               1               5045
>>> Overall average: 580.976807
>>>
>>> [Discussion:] Contention window appears to double until a plateau from
>>> 7 through 9. Weirdly, the contention window appears to drop again
>>> from 10, but
>>> there are too few frames to draw a conclusion.
>>>
>>> ==> iperf_33M_rate_36M_att_21dB.pcap.txt <== (lossy link, < 1% delivery)
>>> TXNo  Avg                     No              Final   ExpectedAvg
>>> 1             485.390198      1940            3          509
>>> 2             479.113434      1922            2          581
>>> 3             479.681824      1914            0          725
>>> 4             485.083038      1903            1          1013
>>> 5             492.088135      1895            4          1589
>>> 6             508.322510      1876            1          2741
>>> 7             524.697876      1870            1          5045
>>> 8             543.054382      1857            0          5045
>>> 9             522.970703      1842            0          5045
>>> 10            478.204132      1837            0          5045
>>> 11            476.520782      1828            0          5045
>>> 12            477.531342      1818            0          5045
>>> 13            476.743652      1810            0          5045
>>> 14            478.936554      1797            0          5045
>>> 15            480.699097      1788            0          5045
>>> 16            482.734314      1784            0          5045
>>> 17            491.608459      1775            0          5045
>>> 18            497.458984      1767            1          5045
>>> 19            495.067932      1752            7          5045
>>> 20            478.102417      1738            295     5045
>>> 21            475.128845      1436            1402   5045
>>> 22            492.692322      26              0          5045
>>> 23            471.576935      26              0          5045
>>> 24            466.884613      26              0          5045
>>> 25            476.269226      26              0          5045
>>> 26            462.192322      26              0          5045
>>> 27            480.961548      26              1          5045
>>> 28            463.600006      25              24         5045
>>> Overall average: 491.068359
>>>
>>> [Discussion:] Contention does not appear to increase, and the number
>>> of transmissions per frame is very large. This behaviour is replicated
>>> with the 54M scenario when a link is extremely lossy.
>>>
>>> ==> iperf_33M_rate_54M_att_1dB.pcap.txt <== (good link, 2400 packets/s)
>>> Average time per TX No:
>>> TXNo  Avg                     No              Final           ExpectedAverage
>>> 1             365.551849      23957   23935           393
>>> 2             409.571442      21              21              465
>>> Overall average: 365.590424
>>>
>>> [Discussion: ] Appears relatively normal.
>>>
>>> ==> iperf_33M_rate_54M_att_10dB.pcap.txt <== (lossy link, but still
>>> 100% delivery, 1500 packets/s)
>>> Average time per TX No:
>>> TXNo  Avg                     No              Final           ExpectedAverage
>>> 1             364.501190      10134   5915    393
>>> 2             434.138000      4196            2461    465
>>> 3             579.482300      1721            1036    609
>>> 4             837.005859      682             397     897
>>> 5             1365.279175     283             155     1473
>>> 6             2572.007812     128             81              2625
>>> 7             4905.195801     46              27              4929
>>> 8             4985.947266     19              12              4929
>>> 9             4627.285645     7               4               4929
>>> 10            366.000000      3               1               4929
>>> 11            335.500000      2               2               4929
>>> Overall average: 473.477020
>>>
>>> [Discussion: ] Appears fine, until transmission 10, which appears to
>>> drop the contention window back to an equivalent first transmission
>>> value, but not enough frames at this point to draw a conclusion.
>>>
>>> ==> iperf_33M_rate_54M_att_11dB.pcap.txt <== (lossy link, but still
>>> 100% delivery, 680 packets/s)
>>> Average time per TX No:
>>> TXNo  Avg                     No              Final           ExpectedAverage
>>> 1             362.082825      2149            539     393
>>> 2             434.672485      1606            368     465
>>> 3             582.795288      1231            307     609
>>> 4             820.347107      919             237     897
>>> 5             1424.753296     673             194     1473
>>> 6             2626.403320     466             143     2625
>>> 7             4734.233887     308             83              4929
>>> 8             4830.244141     217             65              4929
>>> 9             4449.702637     148             33              4929
>>> 10            360.114044      114             36              4929
>>> 11            366.000000      78              20              4929
>>> 12            460.655182      58              20              4929
>>> 13            544.184204      38              9               4929
>>> 14            893.965515      29              7               4929
>>> 15            1361.409058     22              8               4929
>>> 16            2675.285645     14              2               4929
>>> 17            4239.500000     12              5               4929
>>> 18            3198.142822     7               2               4929
>>> 19            5111.799805     5               3               4929
>>> 20            1403.000000     2               1               4929
>>> Overall average: 1063.129883
>>>
>>> [Discussion: ] Everything appears fine until, once again, transmission
>>> 10, when the contention window appears to 'restart' - it climbs
>>> steadily until 17. After this point, there are not enough frames to
>>> draw any conclusions.
>>>
>>> ==> iperf_33M_rate_54M_att_12dB.pcap.txt <== (lossy link, 6% delivery,
>>> 400 packets/s)
>>> Average time per TX No:
>>> TXNo  Avg                     No              Final           ExpectedAvg
>>> 1             360.460724      4482            14              393
>>> 2             366.068481      4453            16              465
>>> 3             360.871735      4413            13              609
>>> 4             361.535553      4386            18              897
>>> 5             367.526062      4357            60              1473
>>> 6             360.003967      4283            3839    2625
>>> 7             361.778046      419             416     4929
>>> Overall average: 362.732910
>>>
>>> [Discussion:] This exhibits the same problem as the extremely lossy
>>> 36M link - the contention window does not appear to rise. Even with
>>> enough frames to draw a good conclusion at transmission 6, the
>>> transmission time average (360) is way below the expected average
>>> (2625).
>>> ==> END OF OUTPUT <==
>>>
>>> The question here is: why does ath5k/mac80211 send out so many
>>> transmissions, and why does it vary so much based on link quality?
>>> Additionally, why does it appear to 'reset' the contention window
>>> after 9 retransmissions of a frame?
>>>
>>> Cheers,
>>>
>>> Jonathan
>>
>> Hi Jonathan!
>>
>> This is a very interesting setup and test. I guess nobody has looked so
>> closely yet... I think this is not necessarily ath5k related, but may be a bug
>> of mac80211 or minstrel, but not sure yet, of course...
>>
>> It's normal that the CW is reset after the retry limits are reached;
>> this is what the standard says:
>>
>> "The CW shall be reset to aCWmin after every successful attempt to transmit an
>> MPDU or MMPDU, when SLRC reaches dot11LongRetryLimit, or when SSRC reaches
>> dot11ShortRetryLimit." (802.11-2007 p261)
>>
>> But it seems weird that there are so many retransmissions. The default maximum
>> numbers of retransmissions should be 7 for short frames and 4 for long frames
>> (dot11[Short|Long]RetryLimit), and this is what is set as defaults in mac80211
>> (local->hw.conf.short_frame_max_tx_count). Seems we are getting many
>> retransmissions from minstrel, I added some debug prints:
>>
>
> When ath5k doesn't get retry limits from above it uses the following
> defaults on dcu. For now I don't think we use
> local->hw.conf.short_frame_max_tx_count for that, so the default is
> ah_limit_tx_retries (AR5K_INIT_TX_RETRY), but it seems that's wrong
> and we should fix it...
>
> /* Tx retry limits */
> #define AR5K_INIT_SH_RETRY                      10
> #define AR5K_INIT_LG_RETRY                      AR5K_INIT_SH_RETRY

This definitely explains the behaviour where the Contention Window is
reset when the card is instructed to keep retransmitting past this
value! Why do you not think we should use the value from mac80211? It
seems that minstrel should be aware of this maximum value and should
not instruct the card to go beyond it.
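
To make the mismatch concrete: totalling the attempts implied by one
descriptor (base tries plus the three mrr segments) for the debug
prints quoted in this thread gives 12, 16 and 24, all well past
mac80211's default short limit of 7. A trivial sketch (hypothetical
helper, not driver code):

```c
#include <assert.h>

/* Total transmissions implied by one ath5k tx descriptor: the base
 * tries plus the three multi-rate-retry segments. */
int total_tx_attempts(int txdesc_tries, const int mrr_tries[3])
{
	return txdesc_tries + mrr_tries[0] + mrr_tries[1] + mrr_tries[2];
}
```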

> /* For station mode */
> #define AR5K_INIT_SSH_RETRY                     32
> #define AR5K_INIT_SLG_RETRY                     AR5K_INIT_SSH_RETRY
> #define AR5K_INIT_TX_RETRY                      10
>
>> *** txdesc tries 3
>> *** mrr 0 tries 3 rate 11
>> *** mrr 1 tries 3 rate 11
>> *** mrr 2 tries 3 rate 11
>>
>> This seems to be the normal case and that would already result in 12
>> transmissions.
>>
>> Another thing that strikes me here is: why use multi rate retries if the rate
>> is all the same? (Ignore the actual value of the rate, this is the HW rate
>> code).
>>
>> Other examples:
>>
>> *** txdesc tries 2
>> *** mrr 0 tries 9 rate 12
>> *** mrr 1 tries 2 rate 13
>> *** mrr 2 tries 3 rate 11
>>
>> = 16 transmissions in sum.
>>
>> *** txdesc tries 9
>> *** mrr 0 tries 3 rate 11
>> *** mrr 1 tries 9 rate 8
>> *** mrr 2 tries 3 rate 11
>>
>> = 24 transmissions in sum. Again, rate[1] and rate[3] are the same, so why
>> bother setting it up twice?
>>
>> bruno
>
> Also on base.c
>
> 2408         /* set up multi-rate retry capabilities */
> 2409         if (sc->ah->ah_version == AR5K_AR5212) {
> 2410                 hw->max_rates = 4;
> 2411                 hw->max_rate_tries = 11;
> 2412         }
>
>
>
> --
> GPG ID: 0xD21DB2DB
> As you read this post global entropy rises. Have Fun ;-)
> Nick
>

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: ath5k: Weird Retransmission Behaviour
  2010-12-06  9:38 ` Nick Kossifidis
@ 2010-12-07  1:18   ` Jonathan Guerin
  0 siblings, 0 replies; 27+ messages in thread
From: Jonathan Guerin @ 2010-12-07  1:18 UTC (permalink / raw)
  To: Nick Kossifidis; +Cc: linux-wireless, ath5k-devel, Bruno Randolf

On Mon, Dec 6, 2010 at 7:38 PM, Nick Kossifidis <mickflemm@gmail.com> wrote:
> 2010/12/6 Jonathan Guerin <jonathan@guerin.id.au>:
>> Hi,
>>
>>
>> I've been doing some investigation into the behaviour of contention
>> windows and retransmissions.
>>
>> Firstly, I'll just describe the test scenario and setup that I have. I
>> have 3 Via x86 nodes with Atheros AR5001X+ cards. They are tethered to
>> each other via coaxial cables, into splitters. They have 20dB of fixed
>> attenuation applied to each antenna output, plus a programmable
>> variable attenuator on each link. One node acts as a sender, one as a
>> receiver, and one simply runs a monitor-mode interface to capture
>> packet traces. All 3 are running kernel version 2.6.37-rc2. The sender
>> and receiver are configured as IBSS stations and are tuned to 5.18
>> GHz.
>>
>> Here's a really dodgy ASCII diagram of the setup:
>>
>> S-----[variable attenuator]-----R
>> |                               |
>> |                               |
>> |                               |
>> +---------------M---------------+
>>
>> where S is the Sender node, R is the Receiver node and M is the
>> Monitoring capture node.
>>
>>
>> Secondly, I have written a program which will parse a captured pcap
>> file from the Monitoring station. It looks for 'chains' of frames with
>> the same sequence number, and where the first frame has the Retry bit
>> set to false in the header and all following have it set to true. Any
>> deviation from this, and the program drops the current chain without
>> including it in its stats, and looks for the next chain matching these
>> requirements. It averages the amount of time per transmission number
>> (i.e. the average of all transmissions which were the first, second,
>> third etc. for a unique sequence number). The transmission time of a
>> frame is the amount of time between the end of the frame and the end
>> of the previous. It tracks these 'chains' of frames with the same
>> sequence number. It considers the last transmission number in each
>> chain as the 'final' transmission.
>>
>> Finally, the link is loaded using a saturated UDP flow, and the data
>> rate is fixed to 54M and 36M. This is specified in the output. The
>> output is attached below.
>>
>> The output describes the fixed link data rate, the variable
>> attenuator's value, the delivery ratio, and the number of transmitted
>> packets/s. I've added a discussion per result set. Each line outputs
>> the transmission number, the average transmission time for this
>> number, the total number of transmissions, the number of frames which
>> ended their transmissions at this number (i.e. where the chain ended
>> its final transmission - this is equivalent to the retransmission
>> value from the Radiotap header + 1), and the average expected
>> transmission time for all that particular transmission number in all
>> chains. This is calculated using the airtime calculations from the
>> 802.11a standard, including the receipt of an ACK frame (28 us at 24
>> Mbit/s) plus a SIFS (16 us). If the transmission did not receive an
>> ACK, the normal ACK timeout is 50 us, but ath5k appears to have this
>> set to 25 us, so the value shouldn't be too far from what is expected.
>>
>
> Did you measure the ACK timeout, or is 25 what you get from the code?
> Because from what we know so far the hw starts counting after switching
> to RX mode, so the phy rx delay (25 from the standard) is also added.

Unfortunately, given how unpredictable the contention window sizes have
been so far, it's very difficult to measure the ACK timeout; I took the
25 us value from the code itself. If, as you say, the phy rx delay is
also added, then the value in the code should be correct, and I am
using the wrong values. Perhaps these assumptions should be documented
where the value is set up?

>
>
>
> --
> GPG ID: 0xD21DB2DB
> As you read this post global entropy rises. Have Fun ;-)
> Nick
>


* Re: ath5k: Weird Retransmission Behaviour
  2010-12-06 18:01   ` Björn Smedman
@ 2010-12-07  1:19     ` Jonathan Guerin
  0 siblings, 0 replies; 27+ messages in thread
From: Jonathan Guerin @ 2010-12-07  1:19 UTC (permalink / raw)
  To: Björn Smedman; +Cc: Bruno Randolf, linux-wireless, ath5k-devel, nbd

2010/12/7 Björn Smedman <bjorn.smedman@venatech.se>:
> On Mon, Dec 6, 2010 at 9:14 AM, Bruno Randolf <br1@einfach.org> wrote:
> [snip]
>>
>> Other examples:
>>
>> *** txdesc tries 2
>> *** mrr 0 tries 9 rate 12
>> *** mrr 1 tries 2 rate 13
>> *** mrr 2 tries 3 rate 11
>>
>> = 16 transmissions in sum.
>>
>> *** txdesc tries 9
>> *** mrr 0 tries 3 rate 11
>> *** mrr 1 tries 9 rate 8
>> *** mrr 2 tries 3 rate 11
>>
>> = 24 transmissions in sum. Again, rate[1] and rate[3] are the same, so why
>> bother setting it up twice?
>
> I remember from experimenting with rate control in madwifi that weird
> stuff can happen when you go above ATH_TXMAXTRY = 13 tx attempts in
> total (all mrr segments combined). We thought we saw some significant
> improvement on poor links when we reduced retries to fit the whole mrr
> chain into 13 retries in total, but we didn't have the equipment to
> really verify that. Perhaps something you could try, Jonathan, in your
> excellent testbed?

I'd be happy to test this if you give me the exact scenario you would
like me to run. Incidentally, I didn't set up this testbed; my
colleague Konstanty did. You can read about it here:
http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5562247 and a
copy of the PDF is accessible from my Dropbox:
http://dl.dropbox.com/u/2451223/3836_publication_Design_of__2932.pdf.

Cheers,

Jonathan

>
> /Björn
>


* Re: [ath5k-devel] ath5k: Weird Retransmission Behaviour
  2010-12-06 10:53     ` Sedat Dilek
@ 2010-12-07  2:29       ` Bruno Randolf
  0 siblings, 0 replies; 27+ messages in thread
From: Bruno Randolf @ 2010-12-07  2:29 UTC (permalink / raw)
  To: sedat.dilek; +Cc: Nick Kossifidis, Jonathan Guerin, ath5k-devel, linux-wireless

On Mon December 6 2010 19:53:49 Sedat Dilek wrote:
> >> But it seems weird that there are so many retransmissions. The default
> >> maximum numbers of retransmissions should be 7 for short frames and 4
> >> for long frames (dot11[Short|Long]RetryLimit), and this is what is set
> >> as defaults in mac80211 (local->hw.conf.short_frame_max_tx_count).
> >> Seems we are getting many
> > 
> >> retransmissions from minstrel, I added some debug prints:
> > When ath5k doesn't get retry limits from above, it uses the following
> > defaults on the DCU. For now I don't think we use
> > local->hw.conf.short_frame_max_tx_count for that, so the default is
> > ah_limit_tx_retries (AR5K_INIT_TX_RETRY), but that seems wrong and we
> > should fix it...
> > 
> > /* Tx retry limits */
> > #define AR5K_INIT_SH_RETRY                      10
> > #define AR5K_INIT_LG_RETRY                      AR5K_INIT_SH_RETRY
> > /* For station mode */
> > #define AR5K_INIT_SSH_RETRY                     32
> > #define AR5K_INIT_SLG_RETRY                     AR5K_INIT_SSH_RETRY
> > #define AR5K_INIT_TX_RETRY                      10

> You mean something like the attached patch?

Not quite. We should get the values from mac80211 and use them.

At least this does explain the resetting of the contention window after 10.

bruno


* Re: ath5k: Weird Retransmission Behaviour
  2010-12-07  1:12   ` Jonathan Guerin
@ 2010-12-07  2:34     ` Bruno Randolf
  0 siblings, 0 replies; 27+ messages in thread
From: Bruno Randolf @ 2010-12-07  2:34 UTC (permalink / raw)
  To: Jonathan Guerin; +Cc: linux-wireless, ath5k-devel, nbd

On Tue December 7 2010 10:12:18 Jonathan Guerin wrote:
> > Another thing that strikes me here is: why use multi rate retries if the
> > rate is all the same? (Ignore the actual value of the rate, this is the
> > HW rate code).
> > 
> > Other examples:
> > 
> > *** txdesc tries 2
> > *** mrr 0 tries 9 rate 12
> > *** mrr 1 tries 2 rate 13
> > *** mrr 2 tries 3 rate 11
> > 
> > = 16 transmissions in sum.
> > 
> > *** txdesc tries 9
> > *** mrr 0 tries 3 rate 11
> > *** mrr 1 tries 9 rate 8
> > *** mrr 2 tries 3 rate 11
> > 
> > = 24 transmissions in sum. Again, rate[1] and rate[3] are the same, so
> > why bother setting it up twice?
> 
> I'm not sure if you still had a fixed rate set here - and I don't know
> 100% how minstrel works - but it could be that minstrel is trying to
> do some probing for better rates (if it was set to auto-rate)?

I did not set a fixed rate, so minstrel was probing for better rates / 
providing alternative rates.

In any case there is no reason to use the same rate twice.

bruno


* Re: [ath5k-devel] ath5k: Weird Retransmission Behaviour
  2010-12-07  1:17     ` Jonathan Guerin
@ 2010-12-08  8:06       ` Bruno Randolf
  2010-12-08  8:12         ` Jonathan Guerin
  0 siblings, 1 reply; 27+ messages in thread
From: Bruno Randolf @ 2010-12-08  8:06 UTC (permalink / raw)
  To: Jonathan Guerin
  Cc: Nick Kossifidis, ath5k-devel, linux-wireless, bjorn.smedman, nbd

> > When ath5k doesn't get retry limits from above, it uses the following
> > defaults on the DCU. For now I don't think we use
> > local->hw.conf.short_frame_max_tx_count for that, so the default is
> > ah_limit_tx_retries (AR5K_INIT_TX_RETRY), but that seems wrong and we
> > should fix it...
> > 
> > /* Tx retry limits */
> > #define AR5K_INIT_SH_RETRY                      10
> > #define AR5K_INIT_LG_RETRY                      AR5K_INIT_SH_RETRY
> > /* For station mode */
> > #define AR5K_INIT_SSH_RETRY                     32
> > #define AR5K_INIT_SLG_RETRY                     AR5K_INIT_SSH_RETRY
> > #define AR5K_INIT_TX_RETRY                      10

I just sent a patch cleaning up this mess. Could you please check it? 
Unfortunately I haven't found a way to really test re-transmissions yet.
Jonathan, could you give it a try in your test setup with my patch, and play 
with the numbers (just hardcode them in ath5k_hw_set_tx_retry_limits) to see 
if they actually have an effect?

As noted in my patch, this does not change the high number of retries we get 
from the rate control. That's a separate issue.

bruno


* Re: [ath5k-devel] ath5k: Weird Retransmission Behaviour
  2010-12-08  8:06       ` Bruno Randolf
@ 2010-12-08  8:12         ` Jonathan Guerin
  0 siblings, 0 replies; 27+ messages in thread
From: Jonathan Guerin @ 2010-12-08  8:12 UTC (permalink / raw)
  To: Bruno Randolf
  Cc: Nick Kossifidis, ath5k-devel, linux-wireless, bjorn.smedman, nbd

On Wed, Dec 8, 2010 at 6:06 PM, Bruno Randolf <br1@einfach.org> wrote:
>> > When ath5k doesn't get retry limits from above, it uses the following
>> > defaults on the DCU. For now I don't think we use
>> > local->hw.conf.short_frame_max_tx_count for that, so the default is
>> > ah_limit_tx_retries (AR5K_INIT_TX_RETRY), but that seems wrong and we
>> > should fix it...
>> >
>> > /* Tx retry limits */
>> > #define AR5K_INIT_SH_RETRY                      10
>> > #define AR5K_INIT_LG_RETRY                      AR5K_INIT_SH_RETRY
>> > /* For station mode */
>> > #define AR5K_INIT_SSH_RETRY                     32
>> > #define AR5K_INIT_SLG_RETRY                     AR5K_INIT_SSH_RETRY
>> > #define AR5K_INIT_TX_RETRY                      10
>
> I just sent a patch cleaning up this mess. Could you please check it?
> Unfortunately i didn't find way to really test re-transmissions, yet.
> Jonathan, could you give it a try in your test setup with my patch, and play
> with the numbers (just hardcode them in ath5k_hw_set_tx_retry_limits) to see
> if they actually have an effect?

Added to my todo for tomorrow - I'll try to do it if I have some spare time!

Cheers,

Jonathan

>
> As noted in my patch, this does not change the high number of retries we get
> from the rate control. That's a separate issue.
>
> bruno
>


* Re: ath5k: Weird Retransmission Behaviour
  2010-12-06  8:14 ` Bruno Randolf
                     ` (2 preceding siblings ...)
  2010-12-07  1:12   ` Jonathan Guerin
@ 2010-12-08 16:08   ` Bob Copeland
  2010-12-08 16:45     ` Bob Copeland
  2010-12-08 21:53     ` Jonathan Guerin
  3 siblings, 2 replies; 27+ messages in thread
From: Bob Copeland @ 2010-12-08 16:08 UTC (permalink / raw)
  To: Bruno Randolf; +Cc: Jonathan Guerin, linux-wireless, ath5k-devel, nbd

On Mon, Dec 6, 2010 at 3:14 AM, Bruno Randolf <br1@einfach.org> wrote:
> But it seems weird that there are so many retransmissions. The default maximum
> numbers of retransmissions should be 7 for short frames and 4 for long frames
> (dot11[Short|Long]RetryLimit), and this is what is set as defaults in mac80211
> (local->hw.conf.short_frame_max_tx_count). Seems we are getting many
> retransmissions from minstrel, I added some debug prints:
>

I posted a patch for this about a week ago to linux-wireless.

AFAICT minstrel doesn't use these configuration parameters
at all (but PID does).

-- 
Bob Copeland %% www.bobcopeland.com


* Re: ath5k: Weird Retransmission Behaviour
  2010-12-08 16:08   ` Bob Copeland
@ 2010-12-08 16:45     ` Bob Copeland
  2010-12-08 16:56       ` John W. Linville
  2010-12-08 21:53     ` Jonathan Guerin
  1 sibling, 1 reply; 27+ messages in thread
From: Bob Copeland @ 2010-12-08 16:45 UTC (permalink / raw)
  To: Bruno Randolf; +Cc: Jonathan Guerin, linux-wireless, ath5k-devel, nbd

On Wed, Dec 8, 2010 at 11:08 AM, Bob Copeland <me@bobcopeland.com> wrote:
> On Mon, Dec 6, 2010 at 3:14 AM, Bruno Randolf <br1@einfach.org> wrote:
>> But it seems weird that there are so many retransmissions. The default maximum
>> numbers of retransmissions should be 7 for short frames and 4 for long frames
>> (dot11[Short|Long]RetryLimit), and this is what is set as defaults in mac80211
>> (local->hw.conf.short_frame_max_tx_count). Seems we are getting many
>> retransmissions from minstrel, I added some debug prints:
>>
>
> I posted a patch for this about a week ago to linux-wireless.
>
> AFAICT minstrel doesn't use these configuration parameters
> at all (but PID does).
>
> --
> Bob Copeland %% www.bobcopeland.com
>

Found the patch:

https://patchwork.kernel.org/patch/359722/


-- 
Bob Copeland %% www.bobcopeland.com


* Re: ath5k: Weird Retransmission Behaviour
  2010-12-08 16:45     ` Bob Copeland
@ 2010-12-08 16:56       ` John W. Linville
  2010-12-08 17:06         ` Bob Copeland
  0 siblings, 1 reply; 27+ messages in thread
From: John W. Linville @ 2010-12-08 16:56 UTC (permalink / raw)
  To: Bob Copeland
  Cc: Bruno Randolf, Jonathan Guerin, linux-wireless, ath5k-devel, nbd

On Wed, Dec 08, 2010 at 11:45:39AM -0500, Bob Copeland wrote:
> On Wed, Dec 8, 2010 at 11:08 AM, Bob Copeland <me@bobcopeland.com> wrote:
> > On Mon, Dec 6, 2010 at 3:14 AM, Bruno Randolf <br1@einfach.org> wrote:
> >> But it seems weird that there are so many retransmissions. The default maximum
> >> numbers of retransmissions should be 7 for short frames and 4 for long frames
> >> (dot11[Short|Long]RetryLimit), and this is what is set as defaults in mac80211
> >> (local->hw.conf.short_frame_max_tx_count). Seems we are getting many
> >> retransmissions from minstrel, I added some debug prints:
> >>
> >
> > I posted a patch for this about a week ago to linux-wireless.
> >
> > AFAICT minstrel doesn't use these configuration parameters
> > at all (but PID does).
> >
> > --
> > Bob Copeland %% www.bobcopeland.com
> >
> 
> Found the patch:
> 
> https://patchwork.kernel.org/patch/359722/

Are you posting that for merging?  Or just for testing?

-- 
John W. Linville		Someday the world will need a hero, and you
linville@tuxdriver.com			might be all we have.  Be ready.


* Re: ath5k: Weird Retransmission Behaviour
  2010-12-08 16:56       ` John W. Linville
@ 2010-12-08 17:06         ` Bob Copeland
  2010-12-08 17:11           ` Bob Copeland
  0 siblings, 1 reply; 27+ messages in thread
From: Bob Copeland @ 2010-12-08 17:06 UTC (permalink / raw)
  To: John W. Linville
  Cc: Bruno Randolf, Jonathan Guerin, linux-wireless, ath5k-devel, nbd

On Wed, Dec 8, 2010 at 11:56 AM, John W. Linville
<linville@tuxdriver.com> wrote:
>> Found the patch:
>>
>> https://patchwork.kernel.org/patch/359722/
>
> Are you posting that for merging?  Or just for testing?

Testing -- I only compile-tested it, but it seemed relevant
to this thread.

-- 
Bob Copeland %% www.bobcopeland.com


* Re: ath5k: Weird Retransmission Behaviour
  2010-12-08 17:06         ` Bob Copeland
@ 2010-12-08 17:11           ` Bob Copeland
  2010-12-08 17:50             ` Sedat Dilek
  0 siblings, 1 reply; 27+ messages in thread
From: Bob Copeland @ 2010-12-08 17:11 UTC (permalink / raw)
  To: John W. Linville
  Cc: Bruno Randolf, Jonathan Guerin, linux-wireless, ath5k-devel, nbd

On Wed, Dec 8, 2010 at 12:06 PM, Bob Copeland <me@bobcopeland.com> wrote:
> On Wed, Dec 8, 2010 at 11:56 AM, John W. Linville
> <linville@tuxdriver.com> wrote:
>>> Found the patch:
>>>
>>> https://patchwork.kernel.org/patch/359722/
>>
>> Are you posting that for merging?  Or just for testing?
>
> Testing -- I only compile tested it but it seemed relevant
> to this thread.

Also, I should note that the retry limits are fixed at
minstrel initialization time; I don't know if that's the
right time to take the config parameters into account
or if it should be done later (and if later, how that fits in
with minstrel's MRR strategy).

-- 
Bob Copeland %% www.bobcopeland.com


* Re: ath5k: Weird Retransmission Behaviour
  2010-12-08 17:11           ` Bob Copeland
@ 2010-12-08 17:50             ` Sedat Dilek
  2010-12-08 21:36               ` Bob Copeland
  0 siblings, 1 reply; 27+ messages in thread
From: Sedat Dilek @ 2010-12-08 17:50 UTC (permalink / raw)
  To: Bob Copeland
  Cc: John W. Linville, Bruno Randolf, Jonathan Guerin, linux-wireless,
	ath5k-devel, nbd

On Wed, Dec 8, 2010 at 6:11 PM, Bob Copeland <me@bobcopeland.com> wrote:
> On Wed, Dec 8, 2010 at 12:06 PM, Bob Copeland <me@bobcopeland.com> wrote:
>> On Wed, Dec 8, 2010 at 11:56 AM, John W. Linville
>> <linville@tuxdriver.com> wrote:
>>>> Found the patch:
>>>>
>>>> https://patchwork.kernel.org/patch/359722/
>>>
>>> Are you posting that for merging?  Or just for testing?
>>
>> Testing -- I only compile tested it but it seemed relevant
>> to this thread.
>
> Also, I should note that the retry limits are fixed at
> minstrel initialization time; I don't know if that's the
> right time to take the config parameters into account
> or if it should be done later (and if later, how that fits in
> with minstrel's MRR strategy).
>
> --
> Bob Copeland %% www.bobcopeland.com
>

I have applied the patch, compiled, and loaded the new mac80211 kernel
module. What would be a good test case?

- Sedat -


* Re: ath5k: Weird Retransmission Behaviour
  2010-12-08 17:50             ` Sedat Dilek
@ 2010-12-08 21:36               ` Bob Copeland
  0 siblings, 0 replies; 27+ messages in thread
From: Bob Copeland @ 2010-12-08 21:36 UTC (permalink / raw)
  To: sedat.dilek
  Cc: John W. Linville, Bruno Randolf, Jonathan Guerin, linux-wireless,
	ath5k-devel, nbd

On Wed, Dec 8, 2010 at 12:50 PM, Sedat Dilek <sedat.dilek@googlemail.com> wrote:
>
> I have applied, compiled and loaded new mac80211 kernel-module.
> What could be a good test-case?

Hrm, not sure, something like this?

 - config the retry limits with iwconfig
 - bring up the interface and connect to an AP, generate traffic with
   iperf or something
 - with another wireless interface & wireshark take a packet trace
 - power off the AP
 - verify that the configured retry limits aren't exceeded in the trace
   once the AP goes away.

And do the same without the patch to see the difference.

-- 
Bob Copeland %% www.bobcopeland.com


* Re: ath5k: Weird Retransmission Behaviour
  2010-12-08 16:08   ` Bob Copeland
  2010-12-08 16:45     ` Bob Copeland
@ 2010-12-08 21:53     ` Jonathan Guerin
  2010-12-09  9:21       ` Helmut Schaa
  1 sibling, 1 reply; 27+ messages in thread
From: Jonathan Guerin @ 2010-12-08 21:53 UTC (permalink / raw)
  To: Bob Copeland; +Cc: Bruno Randolf, linux-wireless, ath5k-devel, nbd

On Thu, Dec 9, 2010 at 2:08 AM, Bob Copeland <me@bobcopeland.com> wrote:
> On Mon, Dec 6, 2010 at 3:14 AM, Bruno Randolf <br1@einfach.org> wrote:
>> But it seems weird that there are so many retransmissions. The default maximum
>> numbers of retransmissions should be 7 for short frames and 4 for long frames
>> (dot11[Short|Long]RetryLimit), and this is what is set as defaults in mac80211
>> (local->hw.conf.short_frame_max_tx_count). Seems we are getting many
>> retransmissions from minstrel, I added some debug prints:
>>
>
> I posted a patch for this about a week ago to linux-wireless.
>
> AFAICT minstrel doesn't use these configuration parameters
> at all (but PID does).

Minstrel seems to be the only available Rate Control
algorithm in my kernel config?

Cheers,

Jonathan

>
> --
> Bob Copeland %% www.bobcopeland.com
>


* Re: ath5k: Weird Retransmission Behaviour
  2010-12-08 21:53     ` Jonathan Guerin
@ 2010-12-09  9:21       ` Helmut Schaa
  2010-12-09 12:38         ` Bob Copeland
  0 siblings, 1 reply; 27+ messages in thread
From: Helmut Schaa @ 2010-12-09  9:21 UTC (permalink / raw)
  To: Jonathan Guerin
  Cc: Bob Copeland, Bruno Randolf, linux-wireless, ath5k-devel, nbd

On Wed, Dec 8, 2010 at 10:53 PM, Jonathan Guerin <jonathan@guerin.id.au> wrote:
> Minstrel seems to be the only available Rate Control
> algorithm in my kernel config?

PID is only selectable on embedded platforms:

config MAC80211_RC_PID
        bool "PID controller based rate control algorithm" if EMBEDDED

Just remove the "if EMBEDDED" from net/mac80211/Kconfig and retry.

Helmut


* Re: ath5k: Weird Retransmission Behaviour
  2010-12-09  9:21       ` Helmut Schaa
@ 2010-12-09 12:38         ` Bob Copeland
  2010-12-09 14:34           ` Jonathan Guerin
  0 siblings, 1 reply; 27+ messages in thread
From: Bob Copeland @ 2010-12-09 12:38 UTC (permalink / raw)
  To: Helmut Schaa
  Cc: Jonathan Guerin, Bruno Randolf, linux-wireless, ath5k-devel, nbd

On Thu, Dec 09, 2010 at 10:21:34AM +0100, Helmut Schaa wrote:
> On Wed, Dec 8, 2010 at 10:53 PM, Jonathan Guerin <jonathan@guerin.id.au> wrote:
> > Minstrel seems to be the only available Rate Control
> > algorithm in my kernel config?
> 
> PID is only selectable on embedded platforms:
> 
> config MAC80211_RC_PID
>         bool "PID controller based rate control algorithm" if EMBEDDED
> 
> Just remove the "if EMBEDDED" from net/mac80211/Kconfig and retry.

For what it's worth, I tested pid and minstrel a while ago with a
modified mac80211_hwsim, and found minstrel to be quite a bit better
at rate adaptation.  So while it may be worth testing out for this
particular use case, minstrel is probably the way to go in general.

-- 
Bob Copeland %% www.bobcopeland.com



* Re: ath5k: Weird Retransmission Behaviour
  2010-12-09 12:38         ` Bob Copeland
@ 2010-12-09 14:34           ` Jonathan Guerin
  2010-12-09 17:00             ` Bob Copeland
  0 siblings, 1 reply; 27+ messages in thread
From: Jonathan Guerin @ 2010-12-09 14:34 UTC (permalink / raw)
  To: Bob Copeland
  Cc: Helmut Schaa, Bruno Randolf, linux-wireless, ath5k-devel, nbd

On Thu, Dec 9, 2010 at 10:38 PM, Bob Copeland <me@bobcopeland.com> wrote:
> On Thu, Dec 09, 2010 at 10:21:34AM +0100, Helmut Schaa wrote:
>> On Wed, Dec 8, 2010 at 10:53 PM, Jonathan Guerin <jonathan@guerin.id.au> wrote:
>> > Minstrel seems to be the only available Rate Control
>> > algorithm in my kernel config?
>>
>> PID is only selectable on embedded platforms:
>>
>> config MAC80211_RC_PID
>>         bool "PID controller based rate control algorithm" if EMBEDDED
>>
>> Just remove the "if EMBEDDED" from net/mac80211/Kconfig and retry.
>
> For what it's worth, I tested pid and minstrel a while ago with a
> modified mac80211_hwsim, and found minstrel to be quite a bit better
> at rate adaptation.  So while it may be worth testing out for this
> particular use case, minstrel is probably the way to go in general.

So, while I say this with no idea how to do it, might it be worth
fixing Minstrel so that it adheres to mac80211's maximum retry values?

Cheers,

Jonathan
>
> --
> Bob Copeland %% www.bobcopeland.com
>
>


* Re: ath5k: Weird Retransmission Behaviour
  2010-12-09 14:34           ` Jonathan Guerin
@ 2010-12-09 17:00             ` Bob Copeland
  2010-12-09 22:41               ` Jonathan Guerin
  0 siblings, 1 reply; 27+ messages in thread
From: Bob Copeland @ 2010-12-09 17:00 UTC (permalink / raw)
  To: Jonathan Guerin
  Cc: Helmut Schaa, Bruno Randolf, linux-wireless, ath5k-devel, nbd

On Fri, Dec 10, 2010 at 12:34:19AM +1000, Jonathan Guerin wrote:
> > For what it's worth, I tested pid and minstrel a while ago with a
> > modified mac80211_hwsim, and found minstrel to be quite a bit better
> > at rate adaptation.  So while it may be worth testing out for this
> > particular use case, minstrel is probably the way to go in general.
> 
> So, while I say this with no idea how to do it, might it be worth
> fixing Minstrel so that it adheres to mac80211's maximum retry values?
 
Yeah, so that's what the linked patch I posted tries to do :)

-- 
Bob Copeland %% www.bobcopeland.com



* Re: ath5k: Weird Retransmission Behaviour
  2010-12-09 17:00             ` Bob Copeland
@ 2010-12-09 22:41               ` Jonathan Guerin
  0 siblings, 0 replies; 27+ messages in thread
From: Jonathan Guerin @ 2010-12-09 22:41 UTC (permalink / raw)
  To: Bob Copeland
  Cc: Helmut Schaa, Bruno Randolf, linux-wireless, ath5k-devel, nbd

On Fri, Dec 10, 2010 at 3:00 AM, Bob Copeland <me@bobcopeland.com> wrote:
> On Fri, Dec 10, 2010 at 12:34:19AM +1000, Jonathan Guerin wrote:
>> > For what it's worth, I tested pid and minstrel a while ago with a
>> > modified mac80211_hwsim, and found minstrel to be quite a bit better
>> > at rate adaptation.  So while it may be worth testing out for this
>> > particular use case, minstrel is probably the way to go in general.
>>
>> So, while I say this with no idea how to do it, might it be worth
>> fixing Minstrel so that it adheres to mac80211's maximum retry values?
>
> Yeah, so that's what the linked patch I posted tries to do :)

Doh! I didn't read it properly - I assumed it was for PID... My bad.

Cheers,

Jonathan

>
> --
> Bob Copeland %% www.bobcopeland.com
>
>


end of thread

Thread overview: 27+ messages
2010-12-06  6:30 ath5k: Weird Retransmission Behaviour Jonathan Guerin
2010-12-06  8:14 ` Bruno Randolf
2010-12-06  9:36   ` [ath5k-devel] " Nick Kossifidis
2010-12-06 10:53     ` Sedat Dilek
2010-12-07  2:29       ` Bruno Randolf
2010-12-07  1:17     ` Jonathan Guerin
2010-12-08  8:06       ` Bruno Randolf
2010-12-08  8:12         ` Jonathan Guerin
2010-12-06 18:01   ` Björn Smedman
2010-12-07  1:19     ` Jonathan Guerin
2010-12-07  1:12   ` Jonathan Guerin
2010-12-07  2:34     ` Bruno Randolf
2010-12-08 16:08   ` Bob Copeland
2010-12-08 16:45     ` Bob Copeland
2010-12-08 16:56       ` John W. Linville
2010-12-08 17:06         ` Bob Copeland
2010-12-08 17:11           ` Bob Copeland
2010-12-08 17:50             ` Sedat Dilek
2010-12-08 21:36               ` Bob Copeland
2010-12-08 21:53     ` Jonathan Guerin
2010-12-09  9:21       ` Helmut Schaa
2010-12-09 12:38         ` Bob Copeland
2010-12-09 14:34           ` Jonathan Guerin
2010-12-09 17:00             ` Bob Copeland
2010-12-09 22:41               ` Jonathan Guerin
2010-12-06  9:38 ` Nick Kossifidis
2010-12-07  1:18   ` Jonathan Guerin
