* Re: [crash] BUG: unable to handle kernel NULL pointer dereference at 0000000000000370
[not found] ` <20080721143023.GA32451@elte.hu>
@ 2008-07-21 15:10 ` David Miller
[not found] ` <20080721150446.GA17746@elte.hu>
1 sibling, 0 replies; 83+ messages in thread
From: David Miller @ 2008-07-21 15:10 UTC (permalink / raw)
To: mingo; +Cc: torvalds, akpm, netdev, linux-kernel, linux-wireless
From: Ingo Molnar <mingo@elte.hu>
Date: Mon, 21 Jul 2008 16:30:23 +0200

> 
> * Ingo Molnar <mingo@elte.hu> wrote:
> 
> > Pid: 1, comm: swapper Not tainted 2.6.26-tip-00013-g6de15c6-dirty #21290
> 
> some more information: find below the same crash with vanilla 
> linus/master and no extra patches. The crash site is:

Johannes/wireless-folks, can you take a look at this?

Thanks.

> 
> (gdb) list *0xffffffff808be0c2
> 0xffffffff808be0c2 is in rollback_registered (net/core/dev.c:3793).
> 3788    {
> 3789            BUG_ON(dev_boot_phase);
> 3790            ASSERT_RTNL();
> 3791
> 3792            /* Some devices call without registering for initialization unwind. */
> 3793            if (dev->reg_state == NETREG_UNINITIALIZED) {
> 3794                    printk(KERN_DEBUG "unregister_netdevice: device %s/%p never "
> 3795                                      "was registered\n", dev->name, dev);
> 3796
> 3797                    WARN_ON(1);
> (gdb)
> 
> Thanks,
> 
> 	Ingo
> 
> ----------------------------------->
> Linux version 2.6.26-05253-g14b395e (mingo@dione) (gcc version 4.2.3) #21308 SMP Mon Jul 21 16:14:51 CEST 2008
> Command line: root=/dev/sda1 earlyprintk=vga console=ttyS0,115200 console=tty 5 profile=0 debug initcall_debug apic=debug apic=verbose ignore_loglevel sysrq_always_enabled pci=nomsi
> BIOS-provided physical RAM map:
>  BIOS-e820: 0000000000000000 - 000000000009fc00 (usable)
>  BIOS-e820: 000000000009fc00 - 00000000000a0000 (reserved)
>  BIOS-e820: 00000000000e0000 - 0000000000100000 (reserved)
>  BIOS-e820: 0000000000100000 - 000000003ed94000 (usable)
>  BIOS-e820: 000000003ed94000 - 000000003ee4e000 (ACPI NVS)
>  BIOS-e820: 000000003ee4e000 - 000000003fea2000 (usable)
>  BIOS-e820: 000000003fea2000 - 000000003fee9000 (ACPI NVS)
>  BIOS-e820: 000000003fee9000 - 000000003feed000 (usable)
>  BIOS-e820: 000000003feed000 - 000000003feff000 (ACPI data)
>  BIOS-e820: 000000003feff000 - 000000003ff00000 (usable)
> KERNEL supported cpus:
>   Intel GenuineIntel
>   AMD AuthenticAMD
>   Centaur CentaurHauls
> console [earlyvga0] enabled
> debug: ignoring loglevel setting.
> last_pfn = 0x3ff00 max_arch_pfn = 0x3ffffffff
[ ... remainder of the quoted bootlog omitted here: memory-mapping setup, ACPI table discovery, IO-APIC routing, per-initcall initcall_debug timings, PCI resource assignment, and driver/filesystem/crypto initialization; the archived copy breaks off mid-log at "> ACPI: Sle" ... ]
ZXAgQnV0dG9uIChDTSkgW1NMUEJdDQo+IGluaXRjYWxsIGFjcGlfYnV0dG9uX2luaXQrMHgwLzB4
NWUgcmV0dXJuZWQgMCBhZnRlciAxNCBtc2Vjcw0KPiBjYWxsaW5nICBhY3BpX2Zhbl9pbml0KzB4
MC8weDVlDQo+IGluaXRjYWxsIGFjcGlfZmFuX2luaXQrMHgwLzB4NWUgcmV0dXJuZWQgMCBhZnRl
ciAwIG1zZWNzDQo+IGNhbGxpbmcgIGlycXJvdXRlcl9pbml0X3N5c2ZzKzB4MC8weDM3DQo+IGlu
aXRjYWxsIGlycXJvdXRlcl9pbml0X3N5c2ZzKzB4MC8weDM3IHJldHVybmVkIDAgYWZ0ZXIgMCBt
c2Vjcw0KPiBjYWxsaW5nICBhY3BpX3Byb2Nlc3Nvcl9pbml0KzB4MC8weGY2DQo+IEFDUEk6IEFD
UEkwMDA3OjAwIGlzIHJlZ2lzdGVyZWQgYXMgY29vbGluZ19kZXZpY2UwDQo+IEFDUEk6IEFDUEkw
MDA3OjAxIGlzIHJlZ2lzdGVyZWQgYXMgY29vbGluZ19kZXZpY2UxDQo+IGluaXRjYWxsIGFjcGlf
cHJvY2Vzc29yX2luaXQrMHgwLzB4ZjYgcmV0dXJuZWQgMCBhZnRlciA5IG1zZWNzDQo+IGNhbGxp
bmcgIGFjcGlfY29udGFpbmVyX2luaXQrMHgwLzB4NDMNCj4gaW5pdGNhbGwgYWNwaV9jb250YWlu
ZXJfaW5pdCsweDAvMHg0MyByZXR1cm5lZCAwIGFmdGVyIDEgbXNlY3MNCj4gY2FsbGluZyAgdG9z
aGliYV9hY3BpX2luaXQrMHgwLzB4MTdkDQo+IGluaXRjYWxsIHRvc2hpYmFfYWNwaV9pbml0KzB4
MC8weDE3ZCByZXR1cm5lZCAtMTkgYWZ0ZXIgMCBtc2Vjcw0KPiBjYWxsaW5nICBhY3BpX3NtYl9o
Y19pbml0KzB4MC8weDE4DQo+IGluaXRjYWxsIGFjcGlfc21iX2hjX2luaXQrMHgwLzB4MTggcmV0
dXJuZWQgMCBhZnRlciAwIG1zZWNzDQo+IGNhbGxpbmcgIGFjcGlfc2JzX2luaXQrMHgwLzB4MjgN
Cj4gaW5pdGNhbGwgYWNwaV9zYnNfaW5pdCsweDAvMHgyOCByZXR1cm5lZCAwIGFmdGVyIDAgbXNl
Y3MNCj4gY2FsbGluZyAgcmFuZF9pbml0aWFsaXplKzB4MC8weDJjDQo+IGluaXRjYWxsIHJhbmRf
aW5pdGlhbGl6ZSsweDAvMHgyYyByZXR1cm5lZCAwIGFmdGVyIDAgbXNlY3MNCj4gY2FsbGluZyAg
dHR5X2luaXQrMHgwLzB4MWM1DQo+IGluaXRjYWxsIHR0eV9pbml0KzB4MC8weDFjNSByZXR1cm5l
ZCAwIGFmdGVyIDI1IG1zZWNzDQo+IGNhbGxpbmcgIHB0eV9pbml0KzB4MC8weDQ2ZQ0KPiBpbml0
Y2FsbCBwdHlfaW5pdCsweDAvMHg0NmUgcmV0dXJuZWQgMCBhZnRlciAyNSBtc2Vjcw0KPiBjYWxs
aW5nICByYXdfaW5pdCsweDAvMHhjYQ0KPiBpbml0Y2FsbCByYXdfaW5pdCsweDAvMHhjYSByZXR1
cm5lZCAwIGFmdGVyIDAgbXNlY3MNCj4gY2FsbGluZyAgcjM5NjRfaW5pdCsweDAvMHgzYQ0KPiBy
Mzk2NDogUGhpbGlwcyByMzk2NCBEcml2ZXIgJFJldmlzaW9uOiAxLjEwICQNCj4gaW5pdGNhbGwg
cjM5NjRfaW5pdCsweDAvMHgzYSByZXR1cm5lZCAwIGFmdGVyIDMgbXNlY3MNCj4gY2FsbGluZyAg
YXBwbGljb21faW5pdCsweDAvMHg0Y2MNCj4gQXBwbGljb20gZHJpdmVyOiAkSWQ6IGFjLmMsdiAx
LjMwIDIwMDAvMDMvMjIgMTY6MDM6NTcgZHdtdzIgRXhwICQNCj4gYWMubzogTm8gUENJIGJvYXJk
cyBmb3VuZC4NCj4gYWMubzogRm9yIGFuIElTQSBib2FyZCB5b3UgbXVzdCBzdXBwbHkgbWVtb3J5
IGFuZCBpcnEgcGFyYW1ldGVycy4NCj4gaW5pdGNhbGwgYXBwbGljb21faW5pdCsweDAvMHg0Y2Mg
cmV0dXJuZWQgLTYgYWZ0ZXIgMTMgbXNlY3MNCj4gaW5pdGNhbGwgYXBwbGljb21faW5pdCsweDAv
MHg0Y2MgcmV0dXJuZWQgd2l0aCBlcnJvciBjb2RlIC02IA0KPiBjYWxsaW5nICBydGNfaW5pdCsw
eDAvMHhhNg0KPiBSZWFsIFRpbWUgQ2xvY2sgRHJpdmVyIHYxLjEyYWMNCj4gaW5pdGNhbGwgcnRj
X2luaXQrMHgwLzB4YTYgcmV0dXJuZWQgMCBhZnRlciAyIG1zZWNzDQo+IGNhbGxpbmcgIGhwZXRf
aW5pdCsweDAvMHg2MA0KPiBpbml0Y2FsbCBocGV0X2luaXQrMHgwLzB4NjAgcmV0dXJuZWQgMCBh
ZnRlciAwIG1zZWNzDQo+IGNhbGxpbmcgIG52cmFtX2luaXQrMHgwLzB4ODANCj4gTm9uLXZvbGF0
aWxlIG1lbW9yeSBkcml2ZXIgdjEuMg0KPiBpbml0Y2FsbCBudnJhbV9pbml0KzB4MC8weDgwIHJl
dHVybmVkIDAgYWZ0ZXIgMiBtc2Vjcw0KPiBjYWxsaW5nICBpOGtfaW5pdCsweDAvMHgxNTQNCj4g
aW5pdGNhbGwgaThrX2luaXQrMHgwLzB4MTU0IHJldHVybmVkIC0xOSBhZnRlciAwIG1zZWNzDQo+
IGNhbGxpbmcgIG1vZF9pbml0KzB4MC8weDFkYQ0KPiBpbml0Y2FsbCBtb2RfaW5pdCsweDAvMHgx
ZGEgcmV0dXJuZWQgLTE5IGFmdGVyIDAgbXNlY3MNCj4gY2FsbGluZyAgcHBkZXZfaW5pdCsweDAv
MHhhNw0KPiBwcGRldjogdXNlci1zcGFjZSBwYXJhbGxlbCBwb3J0IGRyaXZlcg0KPiBpbml0Y2Fs
bCBwcGRldl9pbml0KzB4MC8weGE3IHJldHVybmVkIDAgYWZ0ZXIgMyBtc2Vjcw0KPiBjYWxsaW5n
ICB0bGNsa19pbml0KzB4MC8weDFhZQ0KPiB0ZWxjbGtfaW50ZXJydXAgPSAweGYgbm9uLW1jcGJs
MDAxMCBody4NCj4gaW5pdGNhbGwgdGxjbGtfaW5pdCsweDAvMHgxYWUgcmV0dXJuZWQgLTYgYWZ0
ZXIgMyBtc2Vjcw0KPiBpbml0Y2FsbCB0bGNsa19pbml0KzB4MC8weDFhZSByZXR1cm5lZCB3aXRo
IGVycm9yIGNvZGUgLTYgDQo+IGNhbGxpbmcgIGFncF9pbml0KzB4MC8weDI2DQo+IExpbnV4IGFn
cGdhcnQgaW50ZXJmYWNlIHYwLjEwMw0KPiBpbml0Y2FsbCBhZ3BfaW5pdCsweDAvMHgyNiByZXR1
cm5lZCAwIGFmdGVyIDIgbXNlY3MNCj4gY2FsbGluZyAgYWdwX2ludGVsX2luaXQrMHgwLzB4MjQN
Cj4gaW5pdGNhbGwgYWdwX2ludGVsX2luaXQrMHgwLzB4MjQgcmV0dXJuZWQgMCBhZnRlciAwIG1z
ZWNzDQo+IGNhbGxpbmcgIGFncF92aWFfaW5pdCsweDAvMHgyNA0KPiBpbml0Y2FsbCBhZ3Bfdmlh
X2luaXQrMHgwLzB4MjQgcmV0dXJuZWQgMCBhZnRlciAwIG1zZWNzDQo+IGNhbGxpbmcgIGluaXRf
YXRtZWwrMHgwLzB4MTdjDQo+IGluaXRjYWxsIGluaXRfYXRtZWwrMHgwLzB4MTdjIHJldHVybmVk
IC0xOSBhZnRlciAwIG1zZWNzDQo+IGNhbGxpbmcgIGRybV9jb3JlX2luaXQrMHgwLzB4ZWYNCj4g
W2RybV0gSW5pdGlhbGl6ZWQgZHJtIDEuMS4wIDIwMDYwODEwDQo+IGluaXRjYWxsIGRybV9jb3Jl
X2luaXQrMHgwLzB4ZWYgcmV0dXJuZWQgMCBhZnRlciAzIG1zZWNzDQo+IGNhbGxpbmcgIHRkZnhf
aW5pdCsweDAvMHhjDQo+IGluaXRjYWxsIHRkZnhfaW5pdCsweDAvMHhjIHJldHVybmVkIDAgYWZ0
ZXIgMCBtc2Vjcw0KPiBjYWxsaW5nICByYWRlb25faW5pdCsweDAvMHgxOA0KPiBwY2kgMDAwMDow
MTowMC4wOiBQQ0kgSU5UIEEgLT4gR1NJIDE2IChsZXZlbCwgbG93KSAtPiBJUlEgMTYNCj4gUENJ
OiBTZXR0aW5nIGxhdGVuY3kgdGltZXIgb2YgZGV2aWNlIDAwMDA6MDE6MDAuMCB0byA2NA0KPiBb
ZHJtXSBJbml0aWFsaXplZCByYWRlb24gMS4yOS4wIDIwMDgwNTI4IG9uIG1pbm9yIDANCj4gaW5p
dGNhbGwgcmFkZW9uX2luaXQrMHgwLzB4MTggcmV0dXJuZWQgMCBhZnRlciAxNSBtc2Vjcw0KPiBj
YWxsaW5nICBpODEwX2luaXQrMHgwLzB4MTgNCj4gaW5pdGNhbGwgaTgxMF9pbml0KzB4MC8weDE4
IHJldHVybmVkIDAgYWZ0ZXIgMCBtc2Vjcw0KPiBjYWxsaW5nICBpOTE1X2luaXQrMHgwLzB4MTgN
Cj4gaW5pdGNhbGwgaTkxNV9pbml0KzB4MC8weDE4IHJldHVybmVkIDAgYWZ0ZXIgMCBtc2Vjcw0K
PiBjYWxsaW5nICBzaXNfaW5pdCsweDAvMHgxOA0KPiBpbml0Y2FsbCBzaXNfaW5pdCsweDAvMHgx
OCByZXR1cm5lZCAwIGFmdGVyIDAgbXNlY3MNCj4gY2FsbGluZyAgdmlhX2luaXQrMHgwLzB4MjIN
Cj4gaW5pdGNhbGwgdmlhX2luaXQrMHgwLzB4MjIgcmV0dXJuZWQgMCBhZnRlciAwIG1zZWNzDQo+
IGNhbGxpbmcgIHNlcmlhbDgyNTBfaW5pdCsweDAvMHgxMjINCj4gU2VyaWFsOiA4MjUwLzE2NTUw
IGRyaXZlcjQgcG9ydHMsIElSUSBzaGFyaW5nIGRpc2FibGVkDQo+IO+/vXNlcmlhbDgyNTA6IHR0
eVMwIGF0IEkvTyAweDNmOCAoaXJxID0gNCkgaXMgYSAxNjU1MEENCj4gaW5pdGNhbGwgc2VyaWFs
ODI1MF9pbml0KzB4MC8weDEyMiByZXR1cm5lZCAwIGFmdGVyIDI1MSBtc2Vjcw0KPiBjYWxsaW5n
ICBqc21faW5pdF9tb2R1bGUrMHgwLzB4M2UNCj4gaW5pdGNhbGwganNtX2luaXRfbW9kdWxlKzB4
MC8weDNlIHJldHVybmVkIDAgYWZ0ZXIgMCBtc2Vjcw0KPiBjYWxsaW5nICBwYXJwb3J0X2RlZmF1
bHRfcHJvY19yZWdpc3RlcisweDAvMHgxYg0KPiBpbml0Y2FsbCBwYXJwb3J0X2RlZmF1bHRfcHJv
Y19yZWdpc3RlcisweDAvMHgxYiByZXR1cm5lZCAwIGFmdGVyIDAgbXNlY3MNCj4gY2FsbGluZyAg
cGFycG9ydF9wY19pbml0KzB4MC8weDMzOA0KPiBwYXJwb3J0X3BjIDAwOjA4OiByZXBvcnRlZCBi
eSBQbHVnIGFuZCBQbGF5IEFDUEkNCj4gcGFycG9ydDA6IFBDLXN0eWxlIGF0IDB4Mzc4ICgweDc3
OCksIGlycSA3IFtQQ1NQUCgsLi4uKV0NCj4gaW5pdGNhbGwgcGFycG9ydF9wY19pbml0KzB4MC8w
eDMzOCByZXR1cm5lZCAwIGFmdGVyIDEwIG1zZWNzDQo+IGNhbGxpbmcgIHBhcnBvcnRfYXg4ODc5
Nl9pbml0KzB4MC8weGMNCj4gaW5pdGNhbGwgcGFycG9ydF9heDg4Nzk2X2luaXQrMHgwLzB4YyBy
ZXR1cm5lZCAwIGFmdGVyIDAgbXNlY3MNCj4gY2FsbGluZyAgdG9wb2xvZ3lfc3lzZnNfaW5pdCsw
eDAvMHg0MQ0KPiBpbml0Y2FsbCB0b3BvbG9neV9zeXNmc19pbml0KzB4MC8weDQxIHJldHVybmVk
IDAgYWZ0ZXIgMCBtc2Vjcw0KPiBjYWxsaW5nICBsb29wX2luaXQrMHgwLzB4MTk2DQo+IGxvb3A6
IG1vZHVsZSBsb2FkZWQNCj4gaW5pdGNhbGwgbG9vcF9pbml0KzB4MC8weDE5NiByZXR1cm5lZCAw
IGFmdGVyIDIgbXNlY3MNCj4gY2FsbGluZyAgY3BxYXJyYXlfaW5pdCsweDAvMHgyNTENCj4gQ29t
cGFxIFNNQVJUMiBEcml2ZXIgKHYgMi42LjApDQo+IGluaXRjYWxsIGNwcWFycmF5X2luaXQrMHgw
LzB4MjUxIHJldHVybmVkIDAgYWZ0ZXIgMiBtc2Vjcw0KPiBjYWxsaW5nICBjY2lzc19pbml0KzB4
MC8weDI4DQo+IEhQIENJU1MgRHJpdmVyICh2IDMuNi4yMCkNCj4gaW5pdGNhbGwgY2Npc3NfaW5p
dCsweDAvMHgyOCByZXR1cm5lZCAwIGFmdGVyIDIgbXNlY3MNCj4gY2FsbGluZyAgdWJfaW5pdCsw
eDAvMHg2YQ0KPiB1c2Jjb3JlOiByZWdpc3RlcmVkIG5ldyBpbnRlcmZhY2UgZHJpdmVyIHViDQo+
IGluaXRjYWxsIHViX2luaXQrMHgwLzB4NmEgcmV0dXJuZWQgMCBhZnRlciAzIG1zZWNzDQo+IGNh
bGxpbmcgIHBhc2ljM19iYXNlX2luaXQrMHgwLzB4MTMNCj4gaW5pdGNhbGwgcGFzaWMzX2Jhc2Vf
aW5pdCsweDAvMHgxMyByZXR1cm5lZCAtMTkgYWZ0ZXIgMCBtc2Vjcw0KPiBjYWxsaW5nICBlMTAw
MF9pbml0X21vZHVsZSsweDAvMHg2MQ0KPiBlMTAwMGU6IEludGVsKFIpIFBSTy8xMDAwIE5ldHdv
cmsgRHJpdmVyIC0gMC4zLjMuMy1rMg0KPiBlMTAwMGU6IENvcHlyaWdodCAoYykgMTk5OS0yMDA4
IEludGVsIENvcnBvcmF0aW9uLg0KPiBlMTAwMGUgMDAwMDowNDowMC4wOiBQQ0kgSU5UIEEgLT4g
R1NJIDE3IChsZXZlbCwgbG93KSAtPiBJUlEgMTcNCj4gUENJOiBTZXR0aW5nIGxhdGVuY3kgdGlt
ZXIgb2YgZGV2aWNlIDAwMDA6MDQ6MDAuMCB0byA2NA0KPiBldGgwOiAoUENJIEV4cHJlc3M6Mi41
R0IvczpXaWR0aCB4MSkgMDA6MTY6NzY6YWI6NmU6ODQNCj4gZXRoMDogSW50ZWwoUikgUFJPLzEw
MDAgTmV0d29yayBDb25uZWN0aW9uDQo+IGV0aDA6IE1BQzogMiwgUEhZOiAyLCBQQkEgTm86IGZm
ZmZmZi0wZmYNCj4gaW5pdGNhbGwgZTEwMDBfaW5pdF9tb2R1bGUrMHgwLzB4NjEgcmV0dXJuZWQg
MCBhZnRlciAxNDEgbXNlY3MNCj4gY2FsbGluZyAgaXhnYl9pbml0X21vZHVsZSsweDAvMHg0Yg0K
PiBJbnRlbChSKSBQUk8vMTBHYkUgTmV0d29yayBEcml2ZXIgLSB2ZXJzaW9uIDEuMC4xMzUtazIt
TkFQSQ0KPiBDb3B5cmlnaHQgKGMpIDE5OTktMjAwOCBJbnRlbCBDb3Jwb3JhdGlvbi4NCj4gaW5p
dGNhbGwgaXhnYl9pbml0X21vZHVsZSsweDAvMHg0YiByZXR1cm5lZCAwIGFmdGVyIDggbXNlY3MN
Cj4gY2FsbGluZyAgaXBnX2luaXRfbW9kdWxlKzB4MC8weDE1DQo+IGluaXRjYWxsIGlwZ19pbml0
X21vZHVsZSsweDAvMHgxNSByZXR1cm5lZCAwIGFmdGVyIDAgbXNlY3MNCj4gY2FsbGluZyAgY3hn
YjNfaW5pdF9tb2R1bGUrMHgwLzB4MjANCj4gaW5pdGNhbGwgY3hnYjNfaW5pdF9tb2R1bGUrMHgw
LzB4MjAgcmV0dXJuZWQgMCBhZnRlciAwIG1zZWNzDQo+IGNhbGxpbmcgIHZjYW5faW5pdF9tb2R1
bGUrMHgwLzB4MzYNCj4gdmNhbjogVmlydHVhbCBDQU4gaW50ZXJmYWNlIGRyaXZlcg0KPiBpbml0
Y2FsbCB2Y2FuX2luaXRfbW9kdWxlKzB4MC8weDM2IHJldHVybmVkIDAgYWZ0ZXIgMyBtc2Vjcw0K
PiBjYWxsaW5nICBhdGwxX2luaXRfbW9kdWxlKzB4MC8weDE1DQo+IGluaXRjYWxsIGF0bDFfaW5p
dF9tb2R1bGUrMHgwLzB4MTUgcmV0dXJuZWQgMCBhZnRlciAwIG1zZWNzDQo+IGNhbGxpbmcgIHBs
aXBfaW5pdCsweDAvMHg1ZA0KPiBORVQzIFBMSVAgdmVyc2lvbiAyLjQtcGFycG9ydCBnbmlpYmVA
bXJpLmNvLmpwDQo+IHBsaXAwOiBQYXJhbGxlbCBwb3J0IGF0IDB4Mzc4LCB1c2luZyBJUlEgNy4N
Cj4gaW5pdGNhbGwgcGxpcF9pbml0KzB4MC8weDVkIHJldHVybmVkIDAgYWZ0ZXIgOCBtc2Vjcw0K
PiBjYWxsaW5nICBnZW1faW5pdCsweDAvMHgxNQ0KPiBpbml0Y2FsbCBnZW1faW5pdCsweDAvMHgx
NSByZXR1cm5lZCAwIGFmdGVyIDAgbXNlY3MNCj4gY2FsbGluZyAgdm9ydGV4X2luaXQrMHgwLzB4
YTkNCj4gaW5pdGNhbGwgdm9ydGV4X2luaXQrMHgwLzB4YTkgcmV0dXJuZWQgMCBhZnRlciAwIG1z
ZWNzDQo+IGNhbGxpbmcgIG5lMmtfcGNpX2luaXQrMHgwLzB4MTUNCj4gaW5pdGNhbGwgbmUya19w
Y2lfaW5pdCsweDAvMHgxNSByZXR1cm5lZCAwIGFmdGVyIDAgbXNlY3MNCj4gY2FsbGluZyAgZTEw
MF9pbml0X21vZHVsZSsweDAvMHg1Yw0KPiBlMTAwOiBJbnRlbChSKSBQUk8vMTAwIE5ldHdvcmsg
RHJpdmVyLCAzLjUuMjMtazQtTkFQSQ0KPiBlMTAwOiBDb3B5cmlnaHQoYykgMTk5OS0yMDA2IElu
dGVsIENvcnBvcmF0aW9uDQo+IGluaXRjYWxsIGUxMDBfaW5pdF9tb2R1bGUrMHgwLzB4NWMgcmV0
dXJuZWQgMCBhZnRlciA4IG1zZWNzDQo+IGNhbGxpbmcgIHRsYW5fcHJvYmUrMHgwLzB4ZGENCj4g
VGh1bmRlckxBTiBkcml2ZXIgdjEuMTUNCj4gVExBTjogMCBkZXZpY2VzIGluc3RhbGxlZCwgUENJ
OiAwICBFSVNBOiAwDQo+IGluaXRjYWxsIHRsYW5fcHJvYmUrMHgwLzB4ZGEgcmV0dXJuZWQgLTE5
IGFmdGVyIDUgbXNlY3MNCj4gY2FsbGluZyAgZXBpY19pbml0KzB4MC8weDE1DQo+IGluaXRjYWxs
IGVwaWNfaW5pdCsweDAvMHgxNSByZXR1cm5lZCAwIGFmdGVyIDAgbXNlY3MNCj4gY2FsbGluZyAg
c2lzMTkwX2luaXRfbW9kdWxlKzB4MC8weDE1DQo+IGluaXRjYWxsIHNpczE5MF9pbml0X21vZHVs
ZSsweDAvMHgxNSByZXR1cm5lZCAwIGFmdGVyIDAgbXNlY3MNCj4gY2FsbGluZyAgcjYwNDBfaW5p
dCsweDAvMHgxNQ0KPiBpbml0Y2FsbCByNjA0MF9pbml0KzB4MC8weDE1IHJldHVybmVkIDAgYWZ0
ZXIgMCBtc2Vjcw0KPiBjYWxsaW5nICB5ZWxsb3dmaW5faW5pdCsweDAvMHgxNQ0KPiBpbml0Y2Fs
bCB5ZWxsb3dmaW5faW5pdCsweDAvMHgxNSByZXR1cm5lZCAwIGFmdGVyIDAgbXNlY3MNCj4gY2Fs
bGluZyAgbmF0c2VtaV9pbml0X21vZCsweDAvMHgxNQ0KPiBpbml0Y2FsbCBuYXRzZW1pX2luaXRf
bW9kKzB4MC8weDE1IHJldHVybmVkIDAgYWZ0ZXIgMCBtc2Vjcw0KPiBjYWxsaW5nICBuczgzODIw
X2luaXQrMHgwLzB4MjgNCj4gbnM4MzgyMC5jOiBOYXRpb25hbCBTZW1pY29uZHVjdG9yIERQODM4
MjAgMTAvMTAwLzEwMDAgZHJpdmVyLg0KPiBpbml0Y2FsbCBuczgzODIwX2luaXQrMHgwLzB4Mjgg
cmV0dXJuZWQgMCBhZnRlciA1IG1zZWNzDQo+IGNhbGxpbmcgIHRnM19pbml0KzB4MC8weDE1DQo+
IGluaXRjYWxsIHRnM19pbml0KzB4MC8weDE1IHJldHVybmVkIDAgYWZ0ZXIgMCBtc2Vjcw0KPiBj
YWxsaW5nICBibngyX2luaXQrMHgwLzB4MTUNCj4gaW5pdGNhbGwgYm54Ml9pbml0KzB4MC8weDE1
IHJldHVybmVkIDAgYWZ0ZXIgMCBtc2Vjcw0KPiBjYWxsaW5nICBibngyeF9pbml0KzB4MC8weDE1
DQo+IGluaXRjYWxsIGJueDJ4X2luaXQrMHgwLzB4MTUgcmV0dXJuZWQgMCBhZnRlciAwIG1zZWNz
DQo+IGNhbGxpbmcgIHNrZ2VfaW5pdF9tb2R1bGUrMHgwLzB4MTUNCj4gaW5pdGNhbGwgc2tnZV9p
bml0X21vZHVsZSsweDAvMHgxNSByZXR1cm5lZCAwIGFmdGVyIDAgbXNlY3MNCj4gY2FsbGluZyAg
c2t5Ml9pbml0X21vZHVsZSsweDAvMHg0OQ0KPiBpbml0Y2FsbCBza3kyX2luaXRfbW9kdWxlKzB4
MC8weDQ5IHJldHVybmVkIDAgYWZ0ZXIgMCBtc2Vjcw0KPiBjYWxsaW5nICByaGluZV9pbml0KzB4
MC8weDM5DQo+IGluaXRjYWxsIHJoaW5lX2luaXQrMHgwLzB4MzkgcmV0dXJuZWQgMCBhZnRlciAw
IG1zZWNzDQo+IGNhbGxpbmcgIHN0YXJmaXJlX2luaXQrMHgwLzB4MTUNCj4gaW5pdGNhbGwgc3Rh
cmZpcmVfaW5pdCsweDAvMHgxNSByZXR1cm5lZCAwIGFmdGVyIDAgbXNlY3MNCj4gY2FsbGluZyAg
bWFydmVsbF9pbml0KzB4MC8weDVlDQo+IGluaXRjYWxsIG1hcnZlbGxfaW5pdCsweDAvMHg1ZSBy
ZXR1cm5lZCAwIGFmdGVyIDAgbXNlY3MNCj4gY2FsbGluZyAgY2ljYWRhX2luaXQrMHgwLzB4MzUN
Cj4gaW5pdGNhbGwgY2ljYWRhX2luaXQrMHgwLzB4MzUgcmV0dXJuZWQgMCBhZnRlciAwIG1zZWNz
DQo+IGNhbGxpbmcgIGx4dF9pbml0KzB4MC8weDM1DQo+IGluaXRjYWxsIGx4dF9pbml0KzB4MC8w
eDM1IHJldHVybmVkIDAgYWZ0ZXIgMCBtc2Vjcw0KPiBjYWxsaW5nICBxczY2MTJfaW5pdCsweDAv
MHhjDQo+IGluaXRjYWxsIHFzNjYxMl9pbml0KzB4MC8weGMgcmV0dXJuZWQgMCBhZnRlciAwIG1z
ZWNzDQo+IGNhbGxpbmcgIGlwMTc1Y19pbml0KzB4MC8weGMNCj4gaW5pdGNhbGwgaXAxNzVjX2lu
aXQrMHgwLzB4YyByZXR1cm5lZCAwIGFmdGVyIDAgbXNlY3MNCj4gY2FsbGluZyAgZml4ZWRfbWRp
b19idXNfaW5pdCsweDAvMHg5ZQ0KPiBGaXhlZCBNRElPIEJ1czogcHJvYmVkDQo+IGluaXRjYWxs
IGZpeGVkX21kaW9fYnVzX2luaXQrMHgwLzB4OWUgcmV0dXJuZWQgMCBhZnRlciAyIG1zZWNzDQo+
IGNhbGxpbmcgIHN1bmRhbmNlX2luaXQrMHgwLzB4MTUNCj4gaW5pdGNhbGwgc3VuZGFuY2VfaW5p
dCsweDAvMHgxNSByZXR1cm5lZCAwIGFmdGVyIDAgbXNlY3MNCj4gY2FsbGluZyAgaGFtYWNoaV9p
bml0KzB4MC8weDE1DQo+IGluaXRjYWxsIGhhbWFjaGlfaW5pdCsweDAvMHgxNSByZXR1cm5lZCAw
IGFmdGVyIDAgbXNlY3MNCj4gY2FsbGluZyAgbmV0X29sZGRldnNfaW5pdCsweDAvMHg5NQ0KPiBp
bml0Y2FsbCBuZXRfb2xkZGV2c19pbml0KzB4MC8weDk1IHJldHVybmVkIDAgYWZ0ZXIgMCBtc2Vj
cw0KPiBjYWxsaW5nICBiNDRfaW5pdCsweDAvMHg1OQ0KPiBpbml0Y2FsbCBiNDRfaW5pdCsweDAv
MHg1OSByZXR1cm5lZCAwIGFmdGVyIDAgbXNlY3MNCj4gY2FsbGluZyAgaW5pdF9uaWMrMHgwLzB4
MTUNCj4gaW5pdGNhbGwgaW5pdF9uaWMrMHgwLzB4MTUgcmV0dXJuZWQgMCBhZnRlciAwIG1zZWNz
DQo+IGNhbGxpbmcgIHFsM3h4eF9pbml0X21vZHVsZSsweDAvMHgxNQ0KPiBpbml0Y2FsbCBxbDN4
eHhfaW5pdF9tb2R1bGUrMHgwLzB4MTUgcmV0dXJuZWQgMCBhZnRlciAwIG1zZWNzDQo+IGNhbGxp
bmcgIGR1bW15X2luaXRfbW9kdWxlKzB4MC8weGIyDQo+IGluaXRjYWxsIGR1bW15X2luaXRfbW9k
dWxlKzB4MC8weGIyIHJldHVybmVkIDAgYWZ0ZXIgMCBtc2Vjcw0KPiBjYWxsaW5nICBtYWN2bGFu
X2luaXRfbW9kdWxlKzB4MC8weDQ5DQo+IGluaXRjYWxsIG1hY3ZsYW5faW5pdF9tb2R1bGUrMHgw
LzB4NDkgcmV0dXJuZWQgMCBhZnRlciAwIG1zZWNzDQo+IGNhbGxpbmcgIGRmeF9pbml0KzB4MC8w
eDE1DQo+IGluaXRjYWxsIGRmeF9pbml0KzB4MC8weDE1IHJldHVybmVkIDAgYWZ0ZXIgMCBtc2Vj
cw0KPiBjYWxsaW5nICBydGw4MTM5X2luaXRfbW9kdWxlKzB4MC8weDE1DQo+IGluaXRjYWxsIHJ0
bDgxMzlfaW5pdF9tb2R1bGUrMHgwLzB4MTUgcmV0dXJuZWQgMCBhZnRlciAwIG1zZWNzDQo+IGNh
bGxpbmcgIGF0cF9pbml0X21vZHVsZSsweDAvMHg5Mw0KPiBhdHAuYzp2MS4wOT1hYyAyMDAyLzEw
LzAxIERvbmFsZCBCZWNrZXIgPGJlY2tlckBzY3lsZC5jb20+DQo+IGluaXRjYWxsIGF0cF9pbml0
X21vZHVsZSsweDAvMHg5MyByZXR1cm5lZCAtMTkgYWZ0ZXIgNSBtc2Vjcw0KPiBjYWxsaW5nICBl
cWxfaW5pdF9tb2R1bGUrMHgwLzB4NWINCj4gRXF1YWxpemVyMjAwMjogU2ltb24gSmFuZXMgKHNp
bW9uQG5jbS5jb20pIGFuZCBEYXZpZCBTLiBNaWxsZXIgKGRhdmVtQHJlZGhhdC5jb20pDQo+IGlu
aXRjYWxsIGVxbF9pbml0X21vZHVsZSsweDAvMHg1YiByZXR1cm5lZCAwIGFmdGVyIDcgbXNlY3MN
Cj4gY2FsbGluZyAgdHVuX2luaXQrMHgwLzB4OTYNCj4gdHVuOiBVbml2ZXJzYWwgVFVOL1RBUCBk
ZXZpY2UgZHJpdmVyLCAxLjYNCj4gdHVuOiAoQykgMTk5OS0yMDA0IE1heCBLcmFzbnlhbnNreSA8
bWF4a0BxdWFsY29tbS5jb20+DQo+IGluaXRjYWxsIHR1bl9pbml0KzB4MC8weDk2IHJldHVybmVk
IDAgYWZ0ZXIgOCBtc2Vjcw0KPiBjYWxsaW5nICByaW9faW5pdCsweDAvMHgxNQ0KPiBpbml0Y2Fs
bCByaW9faW5pdCsweDAvMHgxNSByZXR1cm5lZCAwIGFmdGVyIDAgbXNlY3MNCj4gY2FsbGluZyAg
czJpb19zdGFydGVyKzB4MC8weDE1DQo+IGluaXRjYWxsIHMyaW9fc3RhcnRlcisweDAvMHgxNSBy
ZXR1cm5lZCAwIGFmdGVyIDAgbXNlY3MNCj4gY2FsbGluZyAgeGxfcGNpX2luaXQrMHgwLzB4MTUN
Cj4gaW5pdGNhbGwgeGxfcGNpX2luaXQrMHgwLzB4MTUgcmV0dXJuZWQgMCBhZnRlciAwIG1zZWNz
DQo+IGNhbGxpbmcgIGluaXRfZGxjaSsweDAvMHgzNQ0KPiBETENJIGRyaXZlciB2MC4zNSwgNCBK
YW4gMTk5NywgbWlrZS5tY2xhZ2FuQGxpbnV4Lm9yZy4NCj4gaW5pdGNhbGwgaW5pdF9kbGNpKzB4
MC8weDM1IHJldHVybmVkIDAgYWZ0ZXIgNCBtc2Vjcw0KPiBjYWxsaW5nICB1c2JfcnRsODE1MF9p
bml0KzB4MC8weDI4DQo+IHJ0bDgxNTA6IHJ0bDgxNTAgYmFzZWQgdXNiLWV0aGVybmV0IGRyaXZl
ciB2MC42LjIgKDIwMDQvMDgvMjcpDQo+IHVzYmNvcmU6IHJlZ2lzdGVyZWQgbmV3IGludGVyZmFj
ZSBkcml2ZXIgcnRsODE1MA0KPiBpbml0Y2FsbCB1c2JfcnRsODE1MF9pbml0KzB4MC8weDI4IHJl
dHVybmVkIDAgYWZ0ZXIgOSBtc2Vjcw0KPiBjYWxsaW5nICBjZGNfaW5pdCsweDAvMHgxNQ0KPiB1
c2Jjb3JlOiByZWdpc3RlcmVkIG5ldyBpbnRlcmZhY2UgZHJpdmVyIGNkY19ldGhlcg0KPiBpbml0
Y2FsbCBjZGNfaW5pdCsweDAvMHgxNSByZXR1cm5lZCAwIGFmdGVyIDQgbXNlY3MNCj4gY2FsbGlu
ZyAgZG05NjAxX2luaXQrMHgwLzB4MTUNCj4gdXNiY29yZTogcmVnaXN0ZXJlZCBuZXcgaW50ZXJm
YWNlIGRyaXZlciBkbTk2MDENCj4gaW5pdGNhbGwgZG05NjAxX2luaXQrMHgwLzB4MTUgcmV0dXJu
ZWQgMCBhZnRlciA0IG1zZWNzDQo+IGNhbGxpbmcgIHVzYm5ldF9pbml0KzB4MC8weDE1DQo+IHVz
YmNvcmU6IHJlZ2lzdGVyZWQgbmV3IGludGVyZmFjZSBkcml2ZXIgZ2w2MjBhDQo+IGluaXRjYWxs
IHVzYm5ldF9pbml0KzB4MC8weDE1IHJldHVybmVkIDAgYWZ0ZXIgNCBtc2Vjcw0KPiBjYWxsaW5n
ICBwbHVzYl9pbml0KzB4MC8weDE1DQo+IHVzYmNvcmU6IHJlZ2lzdGVyZWQgbmV3IGludGVyZmFj
ZSBkcml2ZXIgcGx1c2INCj4gaW5pdGNhbGwgcGx1c2JfaW5pdCsweDAvMHgxNSByZXR1cm5lZCAw
IGFmdGVyIDQgbXNlY3MNCj4gY2FsbGluZyAgcm5kaXNfaW5pdCsweDAvMHgxNQ0KPiB1c2Jjb3Jl
OiByZWdpc3RlcmVkIG5ldyBpbnRlcmZhY2UgZHJpdmVyIHJuZGlzX2hvc3QNCj4gaW5pdGNhbGwg
cm5kaXNfaW5pdCsweDAvMHgxNSByZXR1cm5lZCAwIGFmdGVyIDQgbXNlY3MNCj4gY2FsbGluZyAg
Y2RjX3N1YnNldF9pbml0KzB4MC8weDE1DQo+IHVzYmNvcmU6IHJlZ2lzdGVyZWQgbmV3IGludGVy
ZmFjZSBkcml2ZXIgY2RjX3N1YnNldA0KPiBpbml0Y2FsbCBjZGNfc3Vic2V0X2luaXQrMHgwLzB4
MTUgcmV0dXJuZWQgMCBhZnRlciA0IG1zZWNzDQo+IGNhbGxpbmcgIG1jczc4MzBfaW5pdCsweDAv
MHgxNQ0KPiB1c2Jjb3JlOiByZWdpc3RlcmVkIG5ldyBpbnRlcmZhY2UgZHJpdmVyIE1PU0NISVAg
dXNiLWV0aGVybmV0IGRyaXZlcg0KPiBpbml0Y2FsbCBtY3M3ODMwX2luaXQrMHgwLzB4MTUgcmV0
dXJuZWQgMCBhZnRlciA1IG1zZWNzDQo+IGNhbGxpbmcgIHVzYm5ldF9pbml0KzB4MC8weDJiDQo+
IGluaXRjYWxsIHVzYm5ldF9pbml0KzB4MC8weDJiIHJldHVybmVkIDAgYWZ0ZXIgMCBtc2Vjcw0K
PiBjYWxsaW5nICBpcHcyMTAwX2luaXQrMHgwLzB4NjUNCj4gaXB3MjEwMDogSW50ZWwoUikgUFJP
L1dpcmVsZXNzIDIxMDAgTmV0d29yayBEcml2ZXIsIGdpdC0xLjIuMg0KPiBpcHcyMTAwOiBDb3B5
cmlnaHQoYykgMjAwMy0yMDA2IEludGVsIENvcnBvcmF0aW9uDQo+IGluaXRjYWxsIGlwdzIxMDBf
aW5pdCsweDAvMHg2NSByZXR1cm5lZCAwIGFmdGVyIDkgbXNlY3MNCj4gY2FsbGluZyAgaXB3X2lu
aXQrMHgwLzB4N2YNCj4gaXB3MjIwMDogSW50ZWwoUikgUFJPL1dpcmVsZXNzIDIyMDAvMjkxNSBO
ZXR3b3JrIERyaXZlciwgMS4yLjJrZA0KPiBpcHcyMjAwOiBDb3B5cmlnaHQoYykgMjAwMy0yMDA2
IEludGVsIENvcnBvcmF0aW9uDQo+IGluaXRjYWxsIGlwd19pbml0KzB4MC8weDdmIHJldHVybmVk
IDAgYWZ0ZXIgOSBtc2Vjcw0KPiBjYWxsaW5nICBpbml0X29yaW5vY28rMHgwLzB4MWQNCj4gb3Jp
bm9jbyAwLjE1IChEYXZpZCBHaWJzb24gPGhlcm1lc0BnaWJzb24uZHJvcGJlYXIuaWQuYXU+LCBQ
YXZlbCBSb3NraW4gPHByb3NraUBnbnUub3JnPiwgZXQgYWwpDQo+IGluaXRjYWxsIGluaXRfb3Jp
bm9jbysweDAvMHgxZCByZXR1cm5lZCAwIGFmdGVyIDggbXNlY3MNCj4gY2FsbGluZyAgaW5pdF9o
ZXJtZXMrMHgwLzB4Mw0KPiBpbml0Y2FsbCBpbml0X2hlcm1lcysweDAvMHgzIHJldHVybmVkIDAg
YWZ0ZXIgMCBtc2Vjcw0KPiBjYWxsaW5nICBvcmlub2NvX3BseF9pbml0KzB4MC8weDJmDQo+IG9y
aW5vY29fcGx4IDAuMTUgKFBhdmVsIFJvc2tpbiA8cHJvc2tpQGdudS5vcmc+LCBEYXZpZCBHaWJz
b24gPGhlcm1lc0BnaWJzb24uZHJvcGJlYXIuaWQuYXU+LCBEYW5pZWwgQmFybG93IDxkYW5AdGVs
ZW50Lm5ldD4pDQo+IGluaXRjYWxsIG9yaW5vY29fcGx4X2luaXQrMHgwLzB4MmYgcmV0dXJuZWQg
MCBhZnRlciAxMCBtc2Vjcw0KPiBjYWxsaW5nICBvcmlub2NvX25vcnRlbF9pbml0KzB4MC8weDJm
DQo+IG9yaW5vY29fbm9ydGVsIDAuMTUgKFRvYmlhcyBIb2ZmbWFubiAmIENocmlzdG9waCBKdW5n
ZWdnZXIgPGRpc2Rvc0B0cmF1bTQwNC5kZT4pDQo+IGluaXRjYWxsIG9yaW5vY29fbm9ydGVsX2lu
aXQrMHgwLzB4MmYgcmV0dXJuZWQgMCBhZnRlciA2IG1zZWNzDQo+IGNhbGxpbmcgIGFpcm9faW5p
dF9tb2R1bGUrMHgwLzB4ZTMNCj4gYWlybygpOiBQcm9iaW5nIGZvciBQQ0kgYWRhcHRlcnMNCj4g
YWlybygpOiBGaW5pc2hlZCBwcm9iaW5nIGZvciBQQ0kgYWRhcHRlcnMNCj4gaW5pdGNhbGwgYWly
b19pbml0X21vZHVsZSsweDAvMHhlMyByZXR1cm5lZCAwIGFmdGVyIDYgbXNlY3MNCj4gY2FsbGlu
ZyAgcHJpc201NF9tb2R1bGVfaW5pdCsweDAvMHgzNg0KPiBMb2FkZWQgcHJpc201NCBkcml2ZXIs
IHZlcnNpb24gMS4yDQo+IGluaXRjYWxsIHByaXNtNTRfbW9kdWxlX2luaXQrMHgwLzB4MzYgcmV0
dXJuZWQgMCBhZnRlciAzIG1zZWNzDQo+IGNhbGxpbmcgIGI0M19pbml0KzB4MC8weDQyDQo+IEJy
b2FkY29tIDQzeHggZHJpdmVyIGxvYWRlZCBbIEZlYXR1cmVzOiBQTCwgRmlybXdhcmUtSUQ6IEZX
MTMgXQ0KPiBpbml0Y2FsbCBiNDNfaW5pdCsweDAvMHg0MiByZXR1cm5lZCAwIGFmdGVyIDUgbXNl
Y3MNCj4gY2FsbGluZyAgdXNiX2luaXQrMHgwLzB4YWQNCj4gemQxMjExcncgdXNiX2luaXQoKQ0K
PiB1c2Jjb3JlOiByZWdpc3RlcmVkIG5ldyBpbnRlcmZhY2UgZHJpdmVyIHpkMTIxMXJ3DQo+IHpk
MTIxMXJ3IGluaXRpYWxpemVkDQo+IGluaXRjYWxsIHVzYl9pbml0KzB4MC8weGFkIHJldHVybmVk
IDAgYWZ0ZXIgOCBtc2Vjcw0KPiBjYWxsaW5nICBybmRpc193bGFuX2luaXQrMHgwLzB4MTUNCj4g
dXNiY29yZTogcmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciBybmRpc193bGFuDQo+IGlu
aXRjYWxsIHJuZGlzX3dsYW5faW5pdCsweDAvMHgxNSByZXR1cm5lZCAwIGFmdGVyIDQgbXNlY3MN
Cj4gY2FsbGluZyAgemQxMjAxX2luaXQrMHgwLzB4MTUNCj4gdXNiY29yZTogcmVnaXN0ZXJlZCBu
ZXcgaW50ZXJmYWNlIGRyaXZlciB6ZDEyMDENCj4gaW5pdGNhbGwgemQxMjAxX2luaXQrMHgwLzB4
MTUgcmV0dXJuZWQgMCBhZnRlciA0IG1zZWNzDQo+IGNhbGxpbmcgIGxic19pbml0X21vZHVsZSsw
eDAvMHgzOA0KPiBpbml0Y2FsbCBsYnNfaW5pdF9tb2R1bGUrMHgwLzB4MzggcmV0dXJuZWQgMCBh
ZnRlciAwIG1zZWNzDQo+IGNhbGxpbmcgIGlmX3NkaW9faW5pdF9tb2R1bGUrMHgwLzB4MmQNCj4g
bGliZXJ0YXNfc2RpbzogTGliZXJ0YXMgU0RJTyBkcml2ZXINCj4gbGliZXJ0YXNfc2RpbzogQ29w
eXJpZ2h0IFBpZXJyZSBPc3NtYW4NCj4gaW5pdGNhbGwgaWZfc2Rpb19pbml0X21vZHVsZSsweDAv
MHgyZCByZXR1cm5lZCAwIGFmdGVyIDYgbXNlY3MNCj4gY2FsbGluZyAgcnRsODE4MF9pbml0KzB4
MC8weDE1DQo+IGluaXRjYWxsIHJ0bDgxODBfaW5pdCsweDAvMHgxNSByZXR1cm5lZCAwIGFmdGVy
IDAgbXNlY3MNCj4gY2FsbGluZyAgcnRsODE4N19pbml0KzB4MC8weDE1DQo+IHVzYmNvcmU6IHJl
Z2lzdGVyZWQgbmV3IGludGVyZmFjZSBkcml2ZXIgcnRsODE4Nw0KPiBpbml0Y2FsbCBydGw4MTg3
X2luaXQrMHgwLzB4MTUgcmV0dXJuZWQgMCBhZnRlciA0IG1zZWNzDQo+IGNhbGxpbmcgIGl3bDQ5
NjVfaW5pdCsweDAvMHg2Yw0KPiBpd2w0OTY1OiBJbnRlbChSKSBXaXJlbGVzcyBXaUZpIExpbmsg
NDk2NUFHTiBkcml2ZXIgZm9yIExpbnV4LCAxLjMuMjdrZA0KPiBpd2w0OTY1OiBDb3B5cmlnaHQo
YykgMjAwMy0yMDA4IEludGVsIENvcnBvcmF0aW9uDQo+IGluaXRjYWxsIGl3bDQ5NjVfaW5pdCsw
eDAvMHg2YyByZXR1cm5lZCAwIGFmdGVyIDEwIG1zZWNzDQo+IGNhbGxpbmcgIGluaXRfbWFjODAy
MTFfaHdzaW0rMHgwLzB4MmYwDQo+IG1hYzgwMjExX2h3c2ltOiBJbml0aWFsaXppbmcgcmFkaW8g
MA0KPiBwaHkwOiBGYWlsZWQgdG8gc2VsZWN0IHJhdGUgY29udHJvbCBhbGdvcml0aG0NCj4gcGh5
MDogRmFpbGVkIHRvIGluaXRpYWxpemUgcmF0ZSBjb250cm9sIGFsZ29yaXRobQ0KPiBtYWM4MDIx
MV9od3NpbTogaWVlZTgwMjExX3JlZ2lzdGVyX2h3IGZhaWxlZCAoLTIpDQo+IEJVRzogdW5hYmxl
IHRvIGhhbmRsZSBrZXJuZWwgTlVMTCBwb2ludGVyIGRlcmVmZXJlbmNlIGF0IDAwMDAwMDAwMDAw
MDAzNzANCj4gSVA6IFs8ZmZmZmZmZmY4MDhiZTBjMj5dIHJvbGxiYWNrX3JlZ2lzdGVyZWQrMHgz
Ny8weGZiDQo+IFBHRCAwIA0KPiBPb3BzOiAwMDAwIFsxXSBTTVAgDQo+IENQVSAxIA0KPiBQaWQ6
IDEsIGNvbW06IHN3YXBwZXIgTm90IHRhaW50ZWQgMi42LjI2LTA1MjUzLWcxNGIzOTVlICMyMTMw
OA0KPiBSSVA6IDAwMTA6WzxmZmZmZmZmZjgwOGJlMGMyPl0gIFs8ZmZmZmZmZmY4MDhiZTBjMj5d
IHJvbGxiYWNrX3JlZ2lzdGVyZWQrMHgzNy8weGZiDQo+IFJTUDogMDAxODpmZmZmODgwMDNmODNm
ZTAwICBFRkxBR1M6IDAwMDEwMjEyDQo+IFJBWDogMDAwMDAwMDAwMDAwMDAwMSBSQlg6IDAwMDAw
MDAwMDAwMDAwMDAgUkNYOiBmZmZmODgwMDNkMDc4ZWQ4DQo+IFJEWDogZmZmZmZmZmY4MDk1ZGUz
ZCBSU0k6IDAwMDAwMDAwMDAwMDAwNDYgUkRJOiAwMDAwMDAwMDAwMDAwMDAwDQo+IFJCUDogMDAw
MDAwMDAwMDAwMDAwMCBSMDg6IDAwMDAwMDAwMDAwMDAwMDAgUjA5OiBmZmZmODgwMDA0MmZhY2Mw
DQo+IFIxMDogMDAwMDAwMDAwMDAwMDAwMCBSMTE6IGZmZmZmZmZmODA0MDIxYWUgUjEyOiAwMDAw
MDAwMDAwMDAwMDAwDQo+IFIxMzogZmZmZjg4MDAzZDA3OTlhMCBSMTQ6IDAwMDAwMDAwMDAwMDAw
MDAgUjE1OiAwMDAwMDAwMDAwMDAwMDA4DQo+IEZTOiAgMDAwMDAwMDAwMDAwMDAwMCgwMDAwKSBH
UzpmZmZmODgwMDNmODI5MTYwKDAwMDApIGtubEdTOjAwMDAwMDAwMDAwMDAwMDANCj4gQ1M6ICAw
MDEwIERTOiAwMDE4IEVTOiAwMDE4IENSMDogMDAwMDAwMDA4MDA1MDAzYg0KPiBDUjI6IDAwMDAw
MDAwMDAwMDAzNzAgQ1IzOiAwMDAwMDAwMDAwMjAxMDAwIENSNDogMDAwMDAwMDAwMDAwMDZlMA0K
PiBEUjA6IDAwMDAwMDAwMDAwMDAwMDAgRFIxOiAwMDAwMDAwMDAwMDAwMDAwIERSMjogMDAwMDAw
MDAwMDAwMDAwMA0KPiBEUjM6IDAwMDAwMDAwMDAwMDAwMDAgRFI2OiAwMDAwMDAwMGZmZmYwZmYw
IERSNzogMDAwMDAwMDAwMDAwMDQwMA0KPiBQcm9jZXNzIHN3YXBwZXIgKHBpZDogMSwgdGhyZWFk
aW5mbyBmZmZmODgwMDNmODNlMDAwLCB0YXNrIGZmZmY4ODAwM2Y4MjQwMDApDQo+IFN0YWNrOiAg
MDAwMDAwMDAwMDAwMDAwMCBmZmZmZmZmZjgwOGJlMWI4IGZmZmY4ODAwM2QwNzgyYzAgZmZmZmZm
ZmY4MDk2MjFhYg0KPiAgZmZmZjg4MDAzZDA3OTlhMCBmZmZmZmZmZjgwNWYxZTVkIDAwMDAwMDAw
ZmZmZmZmZmUgZmZmZjg4MDAzZDA3ODJjMA0KPiAgZmZmZjg4MDAzZDA3OTllMCBmZmZmZmZmZjgx
MWU4M2U0IGZmZmY4ODAwM2Y4M2ZlYTAgZmZmZmZmZmY4MDI0OGYxMg0KPiBDYWxsIFRyYWNlOg0K
PiAgWzxmZmZmZmZmZjgwOGJlMWI4Pl0gdW5yZWdpc3Rlcl9uZXRkZXZpY2UrMHgzMi8weDcxDQo+
ICBbPGZmZmZmZmZmODA5NjIxYWI+XSBpZWVlODAyMTFfdW5yZWdpc3Rlcl9odysweDM1LzB4ZDQN
Cj4gIFs8ZmZmZmZmZmY4MDVmMWU1ZD5dIG1hYzgwMjExX2h3c2ltX2ZyZWUrMHgxZC8weDZhDQo+
ICBbPGZmZmZmZmZmODExZTgzZTQ+XSBpbml0X21hYzgwMjExX2h3c2ltKzB4MmRmLzB4MmYwDQo+
ICBbPGZmZmZmZmZmODAyNDhmMTI+XSBnZXRuc3RpbWVvZmRheSsweDM4LzB4OTUNCj4gIFs8ZmZm
ZmZmZmY4MDI0NmY0OD5dIGt0aW1lX2dldF90cysweDIxLzB4NDkNCj4gIFs8ZmZmZmZmZmY4MTFl
ODEwNT5dIGluaXRfbWFjODAyMTFfaHdzaW0rMHgwLzB4MmYwDQo+ICBbPGZmZmZmZmZmODExY2I4
Yzk+XSBrZXJuZWxfaW5pdCsweDE0My8weDI5NQ0KPiAgWzxmZmZmZmZmZjgwMjA5Yjc5Pl0gX19z
d2l0Y2hfdG8rMHhiNi8weDNiYg0KPiAgWzxmZmZmZmZmZjgwMjBjMzk5Pl0gY2hpbGRfcmlwKzB4
YS8weDExDQo+ICBbPGZmZmZmZmZmODExY2I3ODY+XSBrZXJuZWxfaW5pdCsweDAvMHgyOTUNCj4g
IFs8ZmZmZmZmZmY4MDIwYzM4Zj5dIGNoaWxkX3JpcCsweDAvMHgxMQ0KPiANCj4gDQo+IENvZGU6
IDA0IDBmIDBiIGViIGZlIGU4IGUzIDc4IDAwIDAwIDg1IGMwIDc1IDFkIGJhIGNlIDBlIDAwIDAw
IDQ4IGM3IGM2IGI4IGUxIGQ4IDgwIDQ4IGM3IGM3IDNiIDZjIGNlIDgwIGU4IDI0IDY1IDk3IGZm
IGU4IDBlIGY4IDk0IGZmIDw4Yj4gODMgNzAgMDMgMDAgMDAgODUgYzAgNzUgMjQgNDggODkgZGUg
NDggODkgZGEgNDggYzcgYzcgZTQgZTQgDQo+IFJJUCAgWzxmZmZmZmZmZjgwOGJlMGMyPl0gcm9s
bGJhY2tfcmVnaXN0ZXJlZCsweDM3LzB4ZmINCj4gIFJTUCA8ZmZmZjg4MDAzZjgzZmUwMD4NCj4g
Q1IyOiAwMDAwMDAwMDAwMDAwMzcwDQo+IC0tLVsgZW5kIHRyYWNlIDRiMDFiNGMxYTk3ZGJlYmYg
XS0tLQ0KPiBLZXJuZWwgcGFuaWMgLSBub3Qgc3luY2luZzogQXR0ZW1wdGVkIHRvIGtpbGwgaW5p
dCENCg==
* Re: [crash] BUG: unable to handle kernel NULL pointer dereference at 0000000000000370
[not found] ` <20080721150446.GA17746@elte.hu>
@ 2008-07-21 15:24 ` David Miller
2008-07-21 18:18 ` Ian Schram
0 siblings, 1 reply; 83+ messages in thread
From: David Miller @ 2008-07-21 15:24 UTC (permalink / raw)
To: mingo; +Cc: torvalds, akpm, netdev, linux-kernel, linux-wireless
From: Ingo Molnar <mingo@elte.hu>
Date: Mon, 21 Jul 2008 17:04:48 +0200
[ Adding linux-wireless CC, again, Ingo please retain it for
followups, thanks! ]
> * Ingo Molnar <mingo@elte.hu> wrote:
>
> > > Pid: 1, comm: swapper Not tainted 2.6.26-tip-00013-g6de15c6-dirty #21290
> >
> > some more information: find below the same crash with vanilla
> > linus/master and no extra patches. The crash site is:
>
> a 32-bit testbox just triggered the same crash too:
>
> calling init_mac80211_hwsim+0x0/0x310
> mac80211_hwsim: Initializing radio 0
> phy0: Failed to select rate control algorithm
> phy0: Failed to initialize rate control algorithm
> mac80211_hwsim: ieee80211_register_hw failed (-2)
> BUG: unable to handle kernel NULL pointer dereference at 00000298
> IP: [<c06efb98>] rollback_registered+0x28/0x120
> *pdpt = 0000000000bc9001 *pde = 0000000000000000
> Oops: 0000 [#1] PREEMPT SMP
>
> and that system has no wireless so i guess it's just some unregister
> imbalance kind of init/deinit buglet.
>
> Ingo
* Re: [crash] BUG: unable to handle kernel NULL pointer dereference at 0000000000000370
2008-07-21 15:24 ` David Miller
@ 2008-07-21 18:18 ` Ian Schram
2008-07-21 19:06 ` Ingo Molnar
0 siblings, 1 reply; 83+ messages in thread
From: Ian Schram @ 2008-07-21 18:18 UTC (permalink / raw)
To: David Miller; +Cc: mingo, torvalds, akpm, netdev, linux-kernel, wireless, j
I was looking at this out of interest, but I'm in no way familiar with the code.
It looks to me like the error handling code in mac80211_hwsim is awkward, which
leads to it calling ieee80211_unregister_hw even when ieee80211_register_hw failed.
The function has a for loop in which it creates all the simulated radios. When
something fails, the error handling calls mac80211_hwsim_free, which frees every
simulated radio whose pointer isn't zero. However, the information stored is
insufficient to determine whether the call to ieee80211_register_hw succeeded
for a specific radio.
The included patch makes init_mac80211_hwsim clean up the current simulated radio,
and then calls into mac80211_hwsim_free to clean up all the radios that did succeed.
This however doesn't explain why the rate control registration failed... I
build-tested this, but had some problems reproducing the original problem.
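To make the missing-state problem concrete, here is a minimal user-space
sketch of the pattern described above (the names are illustrative
stand-ins, not the real mac80211 API):

#include <stdio.h>
#include <stdlib.h>

/* hwsim tracks only "allocated" (a non-NULL pointer) per radio; whether
 * the register step succeeded for that radio is never recorded. */
struct radio { int registered; };
static struct radio *radios[2];

static void unregister_radio(struct radio *r)
{
	/* stands in for ieee80211_unregister_hw(), which ends up in
	 * rollback_registered() and oopses when the device was never
	 * registered */
	if (!r->registered) {
		fprintf(stderr, "BUG: unregistering a never-registered radio\n");
		abort();
	}
	r->registered = 0;
}

static void free_all(void)		/* mac80211_hwsim_free() stand-in */
{
	int i;

	for (i = 0; i < 2; i++) {
		if (!radios[i])
			continue;	/* non-NULL is the only state checked */
		unregister_radio(radios[i]);
		free(radios[i]);
		radios[i] = NULL;
	}
}

int main(void)
{
	radios[0] = calloc(1, sizeof(*radios[0]));
	radios[0]->registered = 1;	/* register step succeeded */
	radios[1] = calloc(1, sizeof(*radios[1]));
	/* register step fails for radios[1]; the old error path still
	 * calls free_all(), mirroring the oops above */
	free_all();
	return 0;
}

The patch below avoids this by unwinding the half-initialized radio by
hand before calling into the shared free path.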
Signed-off-by: Ian Schram <ischram@telenet.be>
--- a/mac80211_hwsim.c 2008-07-21 18:48:38.000000000 +0200
+++ b/mac80211_hwsim.c 2008-07-21 19:31:44.000000000 +0200
@@ -364,8 +364,7 @@ static void mac80211_hwsim_free(void)
                         struct mac80211_hwsim_data *data;
                         data = hwsim_radios[i]->priv;
                         ieee80211_unregister_hw(hwsim_radios[i]);
-                        if (!IS_ERR(data->dev))
-                                device_unregister(data->dev);
+                        device_unregister(data->dev);
                         ieee80211_free_hw(hwsim_radios[i]);
                 }
         }
@@ -437,7 +436,7 @@ static int __init init_mac80211_hwsim(vo
                                "mac80211_hwsim: device_create_drvdata "
                                "failed (%ld)\n", PTR_ERR(data->dev));
                         err = -ENOMEM;
-                        goto failed;
+                        goto failed_drvdata;
                 }

                 data->dev->driver = &mac80211_hwsim_driver;
@@ -461,7 +460,7 @@ static int __init init_mac80211_hwsim(vo
                 if (err < 0) {
                         printk(KERN_DEBUG "mac80211_hwsim: "
                                "ieee80211_register_hw failed (%d)\n", err);
-                        goto failed;
+                        goto failed_hw;
                 }

                 printk(KERN_DEBUG "%s: hwaddr %s registered\n",
@@ -479,9 +478,9 @@ static int __init init_mac80211_hwsim(vo
         rtnl_lock();

         err = dev_alloc_name(hwsim_mon, hwsim_mon->name);
-        if (err < 0) {
+        if (err < 0)
                 goto failed_mon;
-        }
+

         err = register_netdevice(hwsim_mon);
         if (err < 0)
@@ -494,7 +493,14 @@ static int __init init_mac80211_hwsim(vo
 failed_mon:
         rtnl_unlock();
         free_netdev(hwsim_mon);
+        mac80211_hwsim_free();
+        return err;

+failed_hw:
+        device_unregister(data->dev);
+failed_drvdata:
+        ieee80211_free_hw(hw);
+        hwsim_radios[i] = 0;
 failed:
         mac80211_hwsim_free();
         return err;
* Re: [crash] BUG: unable to handle kernel NULL pointer dereference at 0000000000000370
2008-07-21 18:18 ` Ian Schram
@ 2008-07-21 19:06 ` Ingo Molnar
2008-07-21 19:13 ` Larry Finger
0 siblings, 1 reply; 83+ messages in thread
From: Ingo Molnar @ 2008-07-21 19:06 UTC (permalink / raw)
To: Ian Schram
Cc: David Miller, torvalds, akpm, netdev, linux-kernel, wireless, j
* Ian Schram <ischram@telenet.be> wrote:
> I was looking at this out of interest, but I'm in no way familiar with
> the code.
thanks Ian for the patch, i'll test it.
Note that it was whitespace damaged, find below a tidied up version of
the patch that i've applied to tip/out-of-tree.
Ingo
----------------------->
commit 2f77dd3a3b5c3a27298fa0a09d8703c09c633fc6
Author: Ian Schram <ischram@telenet.be>
Date: Mon Jul 21 20:18:25 2008 +0200
mac80211_hwsim.c: fix: BUG: unable to handle kernel NULL pointer dereference at 0000000000000370
I was looking at this out of interest, but I'm in no way familiar with
the code.
It looks to me like the error handling code in mac80211_hwsim is
awkward, which leads to it calling ieee80211_unregister_hw even when
ieee80211_register_hw failed.
The function has a for loop in which it creates all the simulated
radios. When something fails, the error handling calls
mac80211_hwsim_free, which frees every simulated radio whose pointer
isn't zero. However, the information stored is insufficient to
determine whether the call to ieee80211_register_hw succeeded for a
specific radio. The included patch makes init_mac80211_hwsim clean up
the current simulated radio, and then calls into mac80211_hwsim_free
to clean up all the radios that did succeed.
This however doesn't explain why the rate control registration
failed... I build-tested this, but had some problems reproducing the
original problem.
Signed-off-by: Ian Schram <ischram@telenet.be>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
drivers/net/wireless/mac80211_hwsim.c | 18 ++++++++++++------
1 files changed, 12 insertions(+), 6 deletions(-)
diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c
index 913dc9f..5816230 100644
--- a/drivers/net/wireless/mac80211_hwsim.c
+++ b/drivers/net/wireless/mac80211_hwsim.c
@@ -364,8 +364,7 @@ static void mac80211_hwsim_free(void)
                         struct mac80211_hwsim_data *data;
                         data = hwsim_radios[i]->priv;
                         ieee80211_unregister_hw(hwsim_radios[i]);
-                        if (!IS_ERR(data->dev))
-                                device_unregister(data->dev);
+                        device_unregister(data->dev);
                         ieee80211_free_hw(hwsim_radios[i]);
                 }
         }
@@ -437,7 +436,7 @@ static int __init init_mac80211_hwsim(void)
                                "mac80211_hwsim: device_create_drvdata "
                                "failed (%ld)\n", PTR_ERR(data->dev));
                         err = -ENOMEM;
-                        goto failed;
+                        goto failed_drvdata;
                 }

                 data->dev->driver = &mac80211_hwsim_driver;
@@ -461,7 +460,7 @@ static int __init init_mac80211_hwsim(void)
                 if (err < 0) {
                         printk(KERN_DEBUG "mac80211_hwsim: "
                                "ieee80211_register_hw failed (%d)\n", err);
-                        goto failed;
+                        goto failed_hw;
                 }

                 printk(KERN_DEBUG "%s: hwaddr %s registered\n",
@@ -479,9 +478,9 @@ static int __init init_mac80211_hwsim(void)
         rtnl_lock();

         err = dev_alloc_name(hwsim_mon, hwsim_mon->name);
-        if (err < 0) {
+        if (err < 0)
                 goto failed_mon;
-        }
+

         err = register_netdevice(hwsim_mon);
         if (err < 0)
@@ -494,7 +493,14 @@ static int __init init_mac80211_hwsim(void)
 failed_mon:
         rtnl_unlock();
         free_netdev(hwsim_mon);
+        mac80211_hwsim_free();
+        return err;

+failed_hw:
+        device_unregister(data->dev);
+failed_drvdata:
+        ieee80211_free_hw(hw);
+        hwsim_radios[i] = 0;
 failed:
         mac80211_hwsim_free();
         return err;
* Re: [crash] BUG: unable to handle kernel NULL pointer dereference at 0000000000000370
2008-07-21 19:06 ` Ingo Molnar
@ 2008-07-21 19:13 ` Larry Finger
2008-07-21 19:34 ` Ingo Molnar
0 siblings, 1 reply; 83+ messages in thread
From: Larry Finger @ 2008-07-21 19:13 UTC (permalink / raw)
To: Ingo Molnar
Cc: Ian Schram, David Miller, torvalds, akpm, netdev, linux-kernel,
wireless, j
Ingo Molnar wrote:
> * Ian Schram <ischram@telenet.be> wrote:
>
>> I was looking at this out of interest, but I'm in no way familiar with
>> the code.
>
> thanks Ian for the patch, i'll test it.
>
> Note that it was whitespace damaged, find below a tidied up version of
> the patch that i've applied to tip/out-of-tree.
>
> Ingo
This patch may be needed to fix error handling in the hw_sim code, but I get the
crash even with that code disabled. I'm currently bisecting to find the culprit.
Larry
* Re: [crash] BUG: unable to handle kernel NULL pointer dereference at 0000000000000370
2008-07-21 19:13 ` Larry Finger
@ 2008-07-21 19:34 ` Ingo Molnar
2008-07-21 19:43 ` Larry Finger
0 siblings, 1 reply; 83+ messages in thread
From: Ingo Molnar @ 2008-07-21 19:34 UTC (permalink / raw)
To: Larry Finger
Cc: Ian Schram, David Miller, torvalds, akpm, netdev, linux-kernel,
wireless, j
* Larry Finger <Larry.Finger@lwfinger.net> wrote:
> Ingo Molnar wrote:
>> * Ian Schram <ischram@telenet.be> wrote:
>>
>>> I was looking at this out of interest, but I'm in no way familiar
>>> with the code.
>>
>> thanks Ian for the patch, i'll test it.
>>
>> Note that it was whitespace damaged, find below a tidied up version of
>> the patch that i've applied to tip/out-of-tree.
>>
>> Ingo
>
> This patch may be needed to fix error handling in the hw_sim code, but
> I get the crash even with that code disabled. I'm currently bisecting
> to find the culprit.
ok. I just reactivated CONFIG_MAC80211_HWSIM, applied Ian's fix and the
crash went away:
calling iwl4965_init+0x0/0x6c
iwl4965: Intel(R) Wireless WiFi Link 4965AGN driver for Linux, 1.3.27kd
iwl4965: Copyright(c) 2003-2008 Intel Corporation
initcall iwl4965_init+0x0/0x6c returned 0 after 10 msecs
calling init_mac80211_hwsim+0x0/0x31c
mac80211_hwsim: Initializing radio 0
PM: Adding info for No Bus:hwsim0
PM: Adding info for No Bus:phy0
PM: Adding info for No Bus:wmaster0
phy0: Failed to select rate control algorithm
phy0: Failed to initialize rate control algorithm
PM: Removing info for No Bus:wmaster0
PM: Removing info for No Bus:phy0
mac80211_hwsim: ieee80211_register_hw failed (-2)
PM: Removing info for No Bus:hwsim0
initcall init_mac80211_hwsim+0x0/0x31c returned -2 after 58 msecs
initcall init_mac80211_hwsim+0x0/0x31c returned with error code -2
calling dmfe_init_module+0x0/0xea
dmfe: Davicom DM9xxx net driver, version 1.36.4 (2002-01-17)
initcall dmfe_init_module+0x0/0xea returned 0 after 5 msecs
So at least as far as the init_mac80211_hwsim() deinit crash goes:
Tested-by: Ingo Molnar <mingo@elte.hu>
Ingo
* Re: [crash] BUG: unable to handle kernel NULL pointer dereference at 0000000000000370
2008-07-21 19:34 ` Ingo Molnar
@ 2008-07-21 19:43 ` Larry Finger
2008-07-21 19:47 ` Linus Torvalds
2008-07-21 20:21 ` David Miller
0 siblings, 2 replies; 83+ messages in thread
From: Larry Finger @ 2008-07-21 19:43 UTC (permalink / raw)
To: Ingo Molnar
Cc: Ian Schram, David Miller, torvalds, akpm, netdev, linux-kernel,
wireless, j
Ingo Molnar wrote:
> * Larry Finger <Larry.Finger@lwfinger.net> wrote:
>
>> Ingo Molnar wrote:
>>> * Ian Schram <ischram@telenet.be> wrote:
>>>
>>>> I was looking at this out of interest, but I'm in no way familiar
>>>> with the code.
>>> thanks Ian for the patch, i'll test it.
>>>
>>> Note that it was whitespace damaged, find below a tidied up version of
>>> the patch that i've applied to tip/out-of-tree.
>>>
>>> Ingo
>> This patch may be needed to fix error handling in the hw_sim code, but
>> I get the crash even with that code disabled. I'm currently bisecting
>> to find the culprit.
>
> ok. I just reactivated CONFIG_MAC80211_HWSIM, applied Ian's fix and the
> crash went away:
>
> calling iwl4965_init+0x0/0x6c
> iwl4965: Intel(R) Wireless WiFi Link 4965AGN driver for Linux, 1.3.27kd
> iwl4965: Copyright(c) 2003-2008 Intel Corporation
> initcall iwl4965_init+0x0/0x6c returned 0 after 10 msecs
> calling init_mac80211_hwsim+0x0/0x31c
> mac80211_hwsim: Initializing radio 0
> PM: Adding info for No Bus:hwsim0
> PM: Adding info for No Bus:phy0
> PM: Adding info for No Bus:wmaster0
> phy0: Failed to select rate control algorithm
> phy0: Failed to initialize rate control algorithm
> PM: Removing info for No Bus:wmaster0
> PM: Removing info for No Bus:phy0
> mac80211_hwsim: ieee80211_register_hw failed (-2)
> PM: Removing info for No Bus:hwsim0
> initcall init_mac80211_hwsim+0x0/0x31c returned -2 after 58 msecs
> initcall init_mac80211_hwsim+0x0/0x31c returned with error code -2
> calling dmfe_init_module+0x0/0xea
> dmfe: Davicom DM9xxx net driver, version 1.36.4 (2002-01-17)
> initcall dmfe_init_module+0x0/0xea returned 0 after 5 msecs
>
> So at least as far as the init_mac80211_hwsim() deinit crash goes:
>
> Tested-by: Ingo Molnar <mingo@elte.hu>
Yes, I'm chasing a distinct bug. The header for mine is
Jul 21 12:19:37 larrylap kernel: kernel BUG at net/core/dev.c:1328!
Jul 21 12:19:37 larrylap kernel: invalid opcode: 0000 [1] SMP
Jul 21 12:19:37 larrylap kernel: CPU 0
Jul 21 12:19:37 larrylap kernel: Modules linked in: af_packet rfkill_input nfs lockd nfs_acl sunrpc cpufreq_conservative cpufreq_userspace cpufreq_powersave powernow_k8 fuse loop dm_mod arc4 ecb crypto_blkcipher b43 firmware_class rfkill mac80211 cfg80211 snd_hda_intel snd_pcm snd_timer led_class snd k8temp input_polldev sr_mod soundcore button battery hwmon cdrom forcedeth ac serio_raw ssb snd_page_alloc sg ehci_hcd sd_mod ohci_hcd usbcore edd fan thermal processor ext3 mbcache jbd pata_amd ahci libata scsi_mod dock
Jul 21 12:19:37 larrylap kernel: Pid: 2057, comm: b43 Not tainted 2.6.26-Linus-git-05253-g14b395e #1
Jul 21 12:19:37 larrylap kernel: RIP: 0010:[<ffffffff8039ec4d>] [<ffffffff8039ec4d>] __netif_schedule+0x12/0x75
Jul 21 12:19:37 larrylap kernel: RSP: 0000:ffff8800b9ae1de0 EFLAGS: 00010246
With an invalid opcode, mine is likely due to stack corruption.
Larry
* Re: [crash] BUG: unable to handle kernel NULL pointer dereference at 0000000000000370
2008-07-21 19:43 ` Larry Finger
@ 2008-07-21 19:47 ` Linus Torvalds
2008-07-21 20:15 ` David Miller
2008-07-21 20:28 ` Larry Finger
2008-07-21 20:21 ` David Miller
1 sibling, 2 replies; 83+ messages in thread
From: Linus Torvalds @ 2008-07-21 19:47 UTC (permalink / raw)
To: Larry Finger
Cc: Ingo Molnar, Ian Schram, David Miller, akpm, netdev, linux-kernel,
wireless, j
On Mon, 21 Jul 2008, Larry Finger wrote:
>
> Yes, I'm chasing a distinct bug. The header for mine is
>
> Jul 21 12:19:37 larrylap kernel: kernel BUG at net/core/dev.c:1328!
Ok, that one is fixed now in my tree. Or at least it's turned into a
warning, so the machine should work.
> With an invalid opcode, mine is likely due to stack corruption.
No, invalid opcode is because we use the "ud2" instruction for BUG(),
which causes an invalid op exception. So any BUG[_ON]() will always cause
that on x86.
Linus
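For reference, a simplified sketch of what BUG() expanded to on x86 at
the time; the real macro in include/asm-x86/bug.h additionally records
the file and line in a __bug_table section under CONFIG_DEBUG_BUGVERBOSE,
so treat this as an approximation:

#define BUG()						\
do {							\
	/* ud2 raises #UD, the invalid-opcode trap */	\
	asm volatile("ud2");				\
	/* unreachable: BUG() never returns */		\
	for (;;)					\
		;					\
} while (0)

The trap handler then recognizes the faulting address as a BUG() site
and prints the oops, which is why the report says "invalid opcode"
rather than anything about memory corruption.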
* Re: [crash] BUG: unable to handle kernel NULL pointer dereference at 0000000000000370
2008-07-21 19:47 ` Linus Torvalds
@ 2008-07-21 20:15 ` David Miller
2008-07-21 20:28 ` Larry Finger
1 sibling, 0 replies; 83+ messages in thread
From: David Miller @ 2008-07-21 20:15 UTC (permalink / raw)
To: torvalds
Cc: Larry.Finger, mingo, ischram, akpm, netdev, linux-kernel,
linux-wireless, j
From: Linus Torvalds <torvalds@linux-foundation.org>
Date: Mon, 21 Jul 2008 12:47:58 -0700 (PDT)
> On Mon, 21 Jul 2008, Larry Finger wrote:
> > With an invalid opcode, mine is likely due to stack corruption.
>
> No, invalid opcode is because we use the "ud2" instruction for BUG(),
> which causes an invalid op exception. So any BUG[_ON]() will always cause
> that on x86.
Is there really no more backtrace from that crash message?
It would tell me what driver it's in.
There is some "comm: b43" in the log so I'll check that one.
* Re: [crash] BUG: unable to handle kernel NULL pointer dereference at 0000000000000370
2008-07-21 19:43 ` Larry Finger
2008-07-21 19:47 ` Linus Torvalds
@ 2008-07-21 20:21 ` David Miller
2008-07-21 20:38 ` Larry Finger
1 sibling, 1 reply; 83+ messages in thread
From: David Miller @ 2008-07-21 20:21 UTC (permalink / raw)
To: Larry.Finger
Cc: mingo, ischram, torvalds, akpm, netdev, linux-kernel,
linux-wireless, j
From: Larry Finger <Larry.Finger@lwfinger.net>
Date: Mon, 21 Jul 2008 14:43:34 -0500
> Jul 21 12:19:37 larrylap kernel: kernel BUG at net/core/dev.c:1328!
> Jul 21 12:19:37 larrylap kernel: invalid opcode: 0000 [1] SMP
> Jul 21 12:19:37 larrylap kernel: CPU 0
> Jul 21 12:19:37 larrylap kernel: Modules linked in: af_packet rfkill_input nfs lockd nfs_acl sunrpc cpufreq_conservative cpufreq_userspace cpufreq_powersave powernow_k8 fuse loop dm_mod arc4 ecb crypto_blkcipher b43 firmware_class rfkill mac80211 cfg80211 snd_hda_intel snd_pcm snd_timer led_class snd k8temp input_polldev sr_mod soundcore button battery hwmon cdrom forcedeth ac serio_raw ssb snd_page_alloc sg ehci_hcd sd_mod ohci_hcd usbcore edd fan thermal processor ext3 mbcache jbd pata_amd ahci libata scsi_mod dock
> Jul 21 12:19:37 larrylap kernel: Pid: 2057, comm: b43 Not tainted 2.6.26-Linus-git-05253-g14b395e #1
> Jul 21 12:19:37 larrylap kernel: RIP: 0010:[<ffffffff8039ec4d>] [<ffffffff8039ec4d>] __netif_schedule+0x12/0x75
> Jul 21 12:19:37 larrylap kernel: RSP: 0000:ffff8800b9ae1de0 EFLAGS: 00010246
>
> With an invalid opcode, mine is likely due to stack corruption.
No further backtrace? That will tell us what driver is causing
this.
* Re: [crash] BUG: unable to handle kernel NULL pointer dereference at 0000000000000370
2008-07-21 19:47 ` Linus Torvalds
2008-07-21 20:15 ` David Miller
@ 2008-07-21 20:28 ` Larry Finger
1 sibling, 0 replies; 83+ messages in thread
From: Larry Finger @ 2008-07-21 20:28 UTC (permalink / raw)
To: Linus Torvalds
Cc: Ingo Molnar, Ian Schram, David Miller, akpm, netdev, linux-kernel,
wireless, j
Linus Torvalds wrote:
>
> On Mon, 21 Jul 2008, Larry Finger wrote:
>> Yes, I'm chasing a distinct bug. The header for mine is
>>
>> Jul 21 12:19:37 larrylap kernel: kernel BUG at net/core/dev.c:1328!
>
> Ok, that one is fixed now in my tree. Or at least it's turned into a
> warning, so the machine should work.
>
>> With an invalid opcode, mine is likely due to stack corruption.
>
> No, invalid opcode is because we use the "ud2" instruction for BUG(),
> which causes an invalid op exception. So any BUG[_ON]() will always cause
> that on x86.
Thanks for the explanation.
With your latest tree, I do get the warning. Unfortunately, it still breaks my
wireless and I still need to do the bisection. That is complicated by the first
kernel in the bisection refusing to build, but I think I now have a workaround.
Larry
* Re: [crash] BUG: unable to handle kernel NULL pointer dereference at 0000000000000370
2008-07-21 20:21 ` David Miller
@ 2008-07-21 20:38 ` Larry Finger
2008-07-21 20:46 ` David Miller
0 siblings, 1 reply; 83+ messages in thread
From: Larry Finger @ 2008-07-21 20:38 UTC (permalink / raw)
To: David Miller
Cc: mingo, ischram, torvalds, akpm, netdev, linux-kernel,
linux-wireless, j
David Miller wrote:
>
> No further backtrace? That will tell us what driver is causing
> this.
Yes, I have a full backtrace.
It starts with possible recursive locking in NetworkManager, and goes directly
into the Warning - this came from a later pull of Linus's tree.
Jul 21 15:11:07 larrylap kernel: [ INFO: possible recursive locking detected ]
Jul 21 15:11:07 larrylap kernel: 2.6.26-Linus-git-05614-ge89970a #8
Jul 21 15:11:07 larrylap kernel: ---------------------------------------------
Jul 21 15:11:07 larrylap kernel: NetworkManager/2661 is trying to acquire lock:
Jul 21 15:11:07 larrylap kernel: (&dev->addr_list_lock){-...}, at: [<ffffffff803a2961>] dev_mc_sync+0x19/0x57
Jul 21 15:11:07 larrylap kernel:
Jul 21 15:11:07 larrylap kernel: but task is already holding lock:
Jul 21 15:11:07 larrylap kernel: (&dev->addr_list_lock){-...}, at: [<ffffffff8039e7c5>] dev_set_rx_mode+0x19/0x2e
Jul 21 15:11:07 larrylap kernel:
Jul 21 15:11:07 larrylap kernel: other info that might help us debug this:
Jul 21 15:11:07 larrylap kernel: 2 locks held by NetworkManager/2661:
Jul 21 15:11:07 larrylap kernel: #0: (rtnl_mutex){--..}, at: [<ffffffff803a8318>] rtnetlink_rcv+0x12/0x27
Jul 21 15:11:07 larrylap kernel: #1: (&dev->addr_list_lock){-...}, at: [<ffffffff8039e7c5>] dev_set_rx_mode+0x19/0x2e
Jul 21 15:11:07 larrylap kernel:
Jul 21 15:11:07 larrylap kernel: stack backtrace:
Jul 21 15:11:07 larrylap kernel: Pid: 2661, comm: NetworkManager Not tainted 2.6.26-Linus-git-05614-ge89970a #8
Jul 21 15:11:07 larrylap kernel:
Jul 21 15:11:07 larrylap kernel: Call Trace:
Jul 21 15:11:07 larrylap kernel: [<ffffffff80251b02>] __lock_acquire+0xb7b/0xecc
Jul 21 15:11:07 larrylap kernel: [<ffffffff80251ea4>] lock_acquire+0x51/0x6a
Jul 21 15:11:07 larrylap kernel: [<ffffffff803a2961>] dev_mc_sync+0x19/0x57
Jul 21 15:11:07 larrylap kernel: [<ffffffff80408f9c>] _spin_lock_bh+0x23/0x2c
Jul 21 15:11:07 larrylap kernel: [<ffffffff803a2961>] dev_mc_sync+0x19/0x57
Jul 21 15:11:07 larrylap kernel: [<ffffffff8039e7cd>] dev_set_rx_mode+0x21/0x2e
Jul 21 15:11:07 larrylap kernel: [<ffffffff803a036c>] dev_open+0x8e/0xb0
Jul 21 15:11:07 larrylap kernel: [<ffffffff8039fd13>] dev_change_flags+0xa6/0x164
Jul 21 15:11:07 larrylap kernel: [<ffffffff803a7421>] do_setlink+0x286/0x349
Jul 21 15:11:07 larrylap kernel: [<ffffffff803a832d>] rtnetlink_rcv_msg+0x0/0x1ec
Jul 21 15:11:07 larrylap syslog-ng[2488]: last message repeated 2 times
Jul 21 15:11:07 larrylap kernel: [<ffffffff803a766e>] rtnl_setlink+0x10b/0x10d
Jul 21 15:11:07 larrylap kernel: [<ffffffff803a832d>] rtnetlink_rcv_msg+0x0/0x1ec
Jul 21 15:11:07 larrylap kernel: [<ffffffff803b1bcf>] netlink_rcv_skb+0x34/0x7d
Jul 21 15:11:07 larrylap kernel: [<ffffffff803a8327>] rtnetlink_rcv+0x21/0x27
Jul 21 15:11:07 larrylap kernel: [<ffffffff803b16cf>] netlink_unicast+0x1f0/0x261
Jul 21 15:11:07 larrylap kernel: [<ffffffff8039a448>] __alloc_skb+0x66/0x12a
Jul 21 15:11:07 larrylap kernel: [<ffffffff803b19a8>] netlink_sendmsg+0x268/0x27b
Jul 21 15:11:07 larrylap rpc.idmapd[2783]: main: fcntl(/var/lib/nfs/rpc_pipefs/nfs): Invalid argument
Jul 21 15:11:07 larrylap kernel: [<ffffffff80393b8d>] sock_sendmsg+0xcb/0xe3
Jul 21 15:11:07 larrylap kernel: [<ffffffff80246aab>] autoremove_wake_function+0x0/0x2e
Jul 21 15:11:07 larrylap kernel: [<ffffffff8039b194>] verify_iovec+0x46/0x82
Jul 21 15:11:07 larrylap kernel: [<ffffffff80393dbc>] sys_sendmsg+0x217/0x28a
Jul 21 15:11:07 larrylap kernel: [<ffffffff80393516>] sockfd_lookup_light+0x1a/0x52
Jul 21 15:11:07 larrylap kernel: [<ffffffff80250990>] trace_hardirqs_on_caller+0xef/0x113
Jul 21 15:11:07 larrylap kernel: [<ffffffff80408b14>] trace_hardirqs_on_thunk+0x3a/0x3f
Jul 21 15:11:07 larrylap kernel: [<ffffffff8020be9b>] system_call_after_swapgs+0x7b/0x80
Jul 21 15:11:07 larrylap kernel:
Jul 21 15:11:07 larrylap kernel: NET: Registered protocol family 17
Jul 21 15:11:08 larrylap kernel: ------------[ cut here ]------------
Jul 21 15:11:08 larrylap kernel: WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
Jul 21 15:11:08 larrylap kernel: Modules linked in: snd_seq_device af_packet nfs lockd nfs_acl sunrpc rfkill_input cpufreq_conservative cpufreq_userspace cpufreq_powersave powernow_k8 fuse loop dm_mod arc4 ecb crypto_blkcipher b43 firmware_class rfkill snd_hda_intel mac80211 cfg80211 snd_pcm snd_timer led_class snd soundcore input_polldev ac k8temp button snd_page_alloc battery sr_mod forcedeth cdrom serio_raw hwmon ssb sg ehci_hcd sd_mod ohci_hcd usbcore edd fan thermal processor ext3 mbcache jbd pata_amd ahci libata scsi_mod dock
Jul 21 15:11:08 larrylap kernel: Pid: 2035, comm: b43 Not tainted 2.6.26-Linus-git-05614-ge89970a #8
Jul 21 15:11:08 larrylap kernel:
Jul 21 15:11:08 larrylap kernel: Call Trace:
Jul 21 15:11:08 larrylap kernel: [<ffffffff80233f6d>] warn_on_slowpath+0x51/0x8c
Jul 21 15:11:08 larrylap kernel: [<ffffffff8039d7f3>] __netif_schedule+0x2c/0x98
Jul 21 15:11:08 larrylap kernel: [<ffffffffa018b44d>] ieee80211_scan_completed+0x25b/0x2e1 [mac80211]
Jul 21 15:11:08 larrylap kernel: [<ffffffffa018b6ce>] ieee80211_sta_scan_work+0x0/0x1b8 [mac80211]
Jul 21 15:11:08 larrylap kernel: [<ffffffff8024325e>] run_workqueue+0xf0/0x1f2
Jul 21 15:11:08 larrylap kernel: [<ffffffff8024343b>] worker_thread+0xdb/0xea
Jul 21 15:11:08 larrylap kernel: [<ffffffff80246aab>] autoremove_wake_function+0x0/0x2e
Jul 21 15:11:08 larrylap avahi-daemon[2877]: Found user 'avahi' (UID 102) and group 'avahi' (GID 104).
Jul 21 15:11:09 larrylap kernel: [<ffffffff80243360>] worker_thread+0x0/0xea
Jul 21 15:11:09 larrylap kernel: [<ffffffff8024678b>] kthread+0x47/0x73
Jul 21 15:11:09 larrylap avahi-daemon[2877]: Successfully dropped root privileges.
Jul 21 15:11:09 larrylap kernel: [<ffffffff80408b14>] trace_hardirqs_on_thunk+0x3a/0x3f
Jul 21 15:11:09 larrylap avahi-daemon[2877]: avahi-daemon 0.6.22 starting up.
Jul 21 15:11:09 larrylap kernel: [<ffffffff8020cea9>] child_rip+0xa/0x11
Jul 21 15:11:09 larrylap kernel: [<ffffffff8020c4df>] restore_args+0x0/0x30
Jul 21 15:11:09 larrylap kernel: [<ffffffff8024671f>] kthreadd+0x188/0x1ad
Jul 21 15:11:09 larrylap kernel: [<ffffffff80246744>] kthread+0x0/0x73
Jul 21 15:11:09 larrylap kernel: [<ffffffff8020ce9f>] child_rip+0x0/0x11
Jul 21 15:11:09 larrylap kernel:
Jul 21 15:11:09 larrylap kernel: ---[ end trace 030d0589d3c6c7f5 ]---
Larry
* Re: [crash] BUG: unable to handle kernel NULL pointer dereference at 0000000000000370
2008-07-21 20:38 ` Larry Finger
@ 2008-07-21 20:46 ` David Miller
2008-07-21 20:51 ` Patrick McHardy
0 siblings, 1 reply; 83+ messages in thread
From: David Miller @ 2008-07-21 20:46 UTC (permalink / raw)
To: Larry.Finger
Cc: mingo, ischram, torvalds, akpm, netdev, linux-kernel,
linux-wireless, j
From: Larry Finger <Larry.Finger@lwfinger.net>
Date: Mon, 21 Jul 2008 15:38:47 -0500
> David Miller wrote:
> >
> > No further backtrace? That will tell us what driver is causing
> > this.
>
> Yes, I have a full backtrace.
>
> It starts with possible recursive locking in NetworkManager, and goes directly
> into the Warning - this came from a later pull of Linus's tree.
That helps a lot, I'm looking at this now.
Thanks.
* Re: [crash] BUG: unable to handle kernel NULL pointer dereference at 0000000000000370
2008-07-21 20:46 ` David Miller
@ 2008-07-21 20:51 ` Patrick McHardy
2008-07-21 21:01 ` David Miller
0 siblings, 1 reply; 83+ messages in thread
From: Patrick McHardy @ 2008-07-21 20:51 UTC (permalink / raw)
To: David Miller
Cc: Larry.Finger, mingo, ischram, torvalds, akpm, netdev,
linux-kernel, linux-wireless, j
David Miller wrote:
> From: Larry Finger <Larry.Finger@lwfinger.net>
> Date: Mon, 21 Jul 2008 15:38:47 -0500
>
>> David Miller wrote:
>>> No further backtrace? That will tell us what driver is causing
>>> this.
>> Yes, I have a full backtrace.
>>
>> It starts with possible recursive locking in NetworkManager, and goes directly
>> into the Warning - this came from a later pull of Linus's tree.
>
> That helps a lot, I'm looking at this now.
I'm guessing this needs similar lockdep class initializations
to _xmit_lock since it essentially has the same nesting rules.
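To sketch the nesting in question (an illustrative call flow modeled on
Larry's backtrace; a stacked device such as a VLAN or mac80211 virtual
interface syncs its address list into the device below it):

dev_open(upper_dev)
  dev_set_rx_mode(upper_dev)
    spin_lock_bh(&upper_dev->addr_list_lock);
    upper_dev's set_rx_mode handler
      dev_mc_sync(lower_dev, upper_dev)
        spin_lock_bh(&lower_dev->addr_list_lock);  /* same lock class,
                                                      one already held */

Both locks belong to the single class every netdev's addr_list_lock gets
by default, so lockdep reports possible recursion even though they are
distinct locks that always nest in this order. Giving each stacked-device
type its own lock_class_key, exactly as is already done for _xmit_lock,
marks the nesting as intentional; that is what the patch below does.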
* Re: [crash] BUG: unable to handle kernel NULL pointer dereference at 0000000000000370
2008-07-21 20:51 ` Patrick McHardy
@ 2008-07-21 21:01 ` David Miller
2008-07-21 21:06 ` Patrick McHardy
0 siblings, 1 reply; 83+ messages in thread
From: David Miller @ 2008-07-21 21:01 UTC (permalink / raw)
To: kaber
Cc: Larry.Finger, mingo, ischram, torvalds, akpm, netdev,
linux-kernel, linux-wireless, j
From: Patrick McHardy <kaber@trash.net>
Date: Mon, 21 Jul 2008 22:51:53 +0200
> David Miller wrote:
> > From: Larry Finger <Larry.Finger@lwfinger.net>
> > Date: Mon, 21 Jul 2008 15:38:47 -0500
> >
> >> David Miller wrote:
> >>> No further backtrace? That will tell us what driver is causing
> >>> this.
> >> Yes, I have a full backtrace.
> >>
> >> It starts with possible recursive locking in NetworkManager, and goes directly
> >> into the Warning - this came from a later pull of Linus's tree.
> >
> > That helps a lot, I'm looking at this now.
>
> I'm guessing this needs similar lockdep class initializations
> to _xmit_lock since it essentially has the same nesting rules.
Yes, I figured that out just now :-)
Maybe something like the following should do it?
diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index 9737c06..a641eea 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -5041,6 +5041,7 @@ static int bond_check_params(struct bond_params *params)
}
static struct lock_class_key bonding_netdev_xmit_lock_key;
+static struct lock_class_key bonding_netdev_addr_lock_key;
static void bond_set_lockdep_class_one(struct net_device *dev,
struct netdev_queue *txq,
@@ -5052,6 +5053,8 @@ static void bond_set_lockdep_class_one(struct net_device *dev,
static void bond_set_lockdep_class(struct net_device *dev)
{
+ lockdep_set_class(&dev->addr_list_lock,
+ &bonding_netdev_addr_lock_key);
netdev_for_each_tx_queue(dev, bond_set_lockdep_class_one, NULL);
}
diff --git a/drivers/net/hamradio/bpqether.c b/drivers/net/hamradio/bpqether.c
index b6500b2..58f4b1d 100644
--- a/drivers/net/hamradio/bpqether.c
+++ b/drivers/net/hamradio/bpqether.c
@@ -123,6 +123,7 @@ static LIST_HEAD(bpq_devices);
* off into a separate class since they always nest.
*/
static struct lock_class_key bpq_netdev_xmit_lock_key;
+static struct lock_class_key bpq_netdev_addr_lock_key;
static void bpq_set_lockdep_class_one(struct net_device *dev,
struct netdev_queue *txq,
@@ -133,6 +134,7 @@ static void bpq_set_lockdep_class_one(struct net_device *dev,
static void bpq_set_lockdep_class(struct net_device *dev)
{
+ lockdep_set_class(&dev->addr_list_lock, &bpq_netdev_addr_lock_key);
netdev_for_each_tx_queue(dev, bpq_set_lockdep_class_one, NULL);
}
diff --git a/drivers/net/macvlan.c b/drivers/net/macvlan.c
index efbc155..4239450 100644
--- a/drivers/net/macvlan.c
+++ b/drivers/net/macvlan.c
@@ -276,6 +276,7 @@ static int macvlan_change_mtu(struct net_device *dev, int new_mtu)
* separate class since they always nest.
*/
static struct lock_class_key macvlan_netdev_xmit_lock_key;
+static struct lock_class_key macvlan_netdev_addr_lock_key;
#define MACVLAN_FEATURES \
(NETIF_F_SG | NETIF_F_ALL_CSUM | NETIF_F_HIGHDMA | NETIF_F_FRAGLIST | \
@@ -295,6 +296,8 @@ static void macvlan_set_lockdep_class_one(struct net_device *dev,
static void macvlan_set_lockdep_class(struct net_device *dev)
{
+ lockdep_set_class(&dev->addr_list_lock,
+ &macvlan_netdev_addr_lock_key);
netdev_for_each_tx_queue(dev, macvlan_set_lockdep_class_one, NULL);
}
diff --git a/drivers/net/wireless/hostap/hostap_hw.c b/drivers/net/wireless/hostap/hostap_hw.c
index 13d5882..3153fe9 100644
--- a/drivers/net/wireless/hostap/hostap_hw.c
+++ b/drivers/net/wireless/hostap/hostap_hw.c
@@ -3101,6 +3101,7 @@ static void prism2_clear_set_tim_queue(local_info_t *local)
* This is a natural nesting, which needs a split lock type.
*/
static struct lock_class_key hostap_netdev_xmit_lock_key;
+static struct lock_class_key hostap_netdev_addr_lock_key;
static void prism2_set_lockdep_class_one(struct net_device *dev,
struct netdev_queue *txq,
@@ -3112,6 +3113,8 @@ static void prism2_set_lockdep_class_one(struct net_device *dev,
static void prism2_set_lockdep_class(struct net_device *dev)
{
+ lockdep_set_class(&dev->addr_list_lock,
+ &hostap_netdev_addr_lock_key);
netdev_for_each_tx_queue(dev, prism2_set_lockdep_class_one, NULL);
}
diff --git a/net/8021q/vlan_dev.c b/net/8021q/vlan_dev.c
index f42bc2b..4bf014e 100644
--- a/net/8021q/vlan_dev.c
+++ b/net/8021q/vlan_dev.c
@@ -569,6 +569,7 @@ static void vlan_dev_set_rx_mode(struct net_device *vlan_dev)
* separate class since they always nest.
*/
static struct lock_class_key vlan_netdev_xmit_lock_key;
+static struct lock_class_key vlan_netdev_addr_lock_key;
static void vlan_dev_set_lockdep_one(struct net_device *dev,
struct netdev_queue *txq,
@@ -581,6 +582,9 @@ static void vlan_dev_set_lockdep_one(struct net_device *dev,
static void vlan_dev_set_lockdep_class(struct net_device *dev, int subclass)
{
+ lockdep_set_class_and_subclass(&dev->addr_list_lock,
+ &vlan_netdev_addr_lock_key,
+ subclass);
netdev_for_each_tx_queue(dev, vlan_dev_set_lockdep_one, &subclass);
}
diff --git a/net/netrom/af_netrom.c b/net/netrom/af_netrom.c
index fccc250..532e4fa 100644
--- a/net/netrom/af_netrom.c
+++ b/net/netrom/af_netrom.c
@@ -73,6 +73,7 @@ static const struct proto_ops nr_proto_ops;
* separate class since they always nest.
*/
static struct lock_class_key nr_netdev_xmit_lock_key;
+static struct lock_class_key nr_netdev_addr_lock_key;
static void nr_set_lockdep_one(struct net_device *dev,
struct netdev_queue *txq,
@@ -83,6 +84,7 @@ static void nr_set_lockdep_one(struct net_device *dev,
static void nr_set_lockdep_key(struct net_device *dev)
{
+ lockdep_set_class(&dev->addr_list_lock, &nr_netdev_addr_lock_key);
netdev_for_each_tx_queue(dev, nr_set_lockdep_one, NULL);
}
diff --git a/net/rose/af_rose.c b/net/rose/af_rose.c
index dbc963b..a7f1ce1 100644
--- a/net/rose/af_rose.c
+++ b/net/rose/af_rose.c
@@ -74,6 +74,7 @@ ax25_address rose_callsign;
* separate class since they always nest.
*/
static struct lock_class_key rose_netdev_xmit_lock_key;
+static struct lock_class_key rose_netdev_addr_lock_key;
static void rose_set_lockdep_one(struct net_device *dev,
struct netdev_queue *txq,
@@ -84,6 +85,7 @@ static void rose_set_lockdep_one(struct net_device *dev,
static void rose_set_lockdep_key(struct net_device *dev)
{
+ lockdep_set_class(&dev->addr_list_lock, &rose_netdev_addr_lock_key);
netdev_for_each_tx_queue(dev, rose_set_lockdep_one, NULL);
}
^ permalink raw reply related [flat|nested] 83+ messages in thread
* Re: [crash] BUG: unable to handle kernel NULL pointer dereference at 0000000000000370
2008-07-21 21:01 ` David Miller
@ 2008-07-21 21:06 ` Patrick McHardy
2008-07-21 21:35 ` Patrick McHardy
0 siblings, 1 reply; 83+ messages in thread
From: Patrick McHardy @ 2008-07-21 21:06 UTC (permalink / raw)
To: David Miller
Cc: Larry.Finger, mingo, ischram, torvalds, akpm, netdev,
linux-kernel, linux-wireless, j
David Miller wrote:
> From: Patrick McHardy <kaber@trash.net>
> Date: Mon, 21 Jul 2008 22:51:53 +0200
>
>> David Miller wrote:
>>> From: Larry Finger <Larry.Finger@lwfinger.net>
>>> Date: Mon, 21 Jul 2008 15:38:47 -0500
>>>
>>>> David Miller wrote:
>>>>> No further backtrace? That will tell us what driver is causing
>>>>> this.
>>>> Yes, I have a full backtrace.
>>>>
>>>> It starts with possible recursive locking in NetworkManager, and goes directly
>>>> into the Warning - this came from a later pull of Linus's tree.
>>> That helps a lot, I'm looking at this now.
>> I'm guessing this needs similar lockdep class initializations
>> to _xmit_lock since it essentially has the same nesting rules.
>
> Yes, I figured that out just now :-)
>
> Maybe something like the following should do it?
It looks correct in any case. I'm not sure whether it fixes
this lockdep warning though; according to the backtrace and
module list, it's b43 and dev_mc_sync in net/mac80211/main.c
that are causing the error, which don't seem to be included
in your patch. I'm unable to find where the xmit_lock lockdep
class was previously initialized though, so I must be missing
something :)
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: [crash] BUG: unable to handle kernel NULL pointer dereference at 0000000000000370
2008-07-21 21:06 ` Patrick McHardy
@ 2008-07-21 21:35 ` Patrick McHardy
2008-07-21 21:42 ` Patrick McHardy
2008-07-21 21:51 ` Larry Finger
0 siblings, 2 replies; 83+ messages in thread
From: Patrick McHardy @ 2008-07-21 21:35 UTC (permalink / raw)
To: David Miller
Cc: Larry.Finger, mingo, ischram, torvalds, akpm, netdev,
linux-kernel, linux-wireless, j
[-- Attachment #1: Type: text/plain, Size: 659 bytes --]
Patrick McHardy wrote:
> David Miller wrote:
>> Maybe something like the following should do it?
>
>
> It looks correct in any case. I'm not sure whether it fixes
> this lockdep warning though; according to the backtrace and
> module list, it's b43 and dev_mc_sync in net/mac80211/main.c
> that are causing the error, which don't seem to be included
> in your patch. I'm unable to find where the xmit_lock lockdep
> class was previously initialized though, so I must be missing
> something :)
This is what I was missing: we're setting a lockdep class
by default depending on dev->type. This patch combined
with yours should fix all addr_list_lock warnings.
[-- Attachment #2: x --]
[-- Type: text/plain, Size: 2579 bytes --]
net: set lockdep class for dev->addr_list_lock
Initialize dev->addr_list_lock lockdep classes equally to dev->_xmit_lock.
Signed-off-by: Patrick McHardy <kaber@trash.net>
diff --git a/net/core/dev.c b/net/core/dev.c
index 2eed17b..9cfed90 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -299,6 +299,7 @@ static const char *netdev_lock_name[] =
"_xmit_NONE"};
static struct lock_class_key netdev_xmit_lock_key[ARRAY_SIZE(netdev_lock_type)];
+static struct lock_class_key netdev_addr_lock_key[ARRAY_SIZE(netdev_lock_type)];
static inline unsigned short netdev_lock_pos(unsigned short dev_type)
{
@@ -311,8 +312,8 @@ static inline unsigned short netdev_lock_pos(unsigned short dev_type)
return ARRAY_SIZE(netdev_lock_type) - 1;
}
-static inline void netdev_set_lockdep_class(spinlock_t *lock,
- unsigned short dev_type)
+static inline void netdev_set_xmit_lockdep_class(spinlock_t *lock,
+ unsigned short dev_type)
{
int i;
@@ -320,9 +321,23 @@ static inline void netdev_set_lockdep_class(spinlock_t *lock,
lockdep_set_class_and_name(lock, &netdev_xmit_lock_key[i],
netdev_lock_name[i]);
}
+
+static inline void netdev_set_addr_lockdep_class(spinlock_t *lock,
+ unsigned short dev_type)
+{
+ int i;
+
+ i = netdev_lock_pos(dev_type);
+ lockdep_set_class_and_name(lock, &netdev_addr_lock_key[i],
+ netdev_lock_name[i]);
+}
#else
-static inline void netdev_set_lockdep_class(spinlock_t *lock,
- unsigned short dev_type)
+static inline void netdev_set_xmit_lockdep_class(spinlock_t *lock,
+ unsigned short dev_type)
+{
+}
+static inline void netdev_set_addr_lockdep_class(spinlock_t *lock,
+ unsigned short dev_type)
{
}
#endif
@@ -3843,14 +3858,15 @@ static void __netdev_init_queue_locks_one(struct net_device *dev,
void *_unused)
{
spin_lock_init(&dev_queue->_xmit_lock);
- netdev_set_lockdep_class(&dev_queue->_xmit_lock, dev->type);
+ netdev_set_xmit_lockdep_class(&dev_queue->_xmit_lock, dev->type);
dev_queue->xmit_lock_owner = -1;
}
-static void netdev_init_queue_locks(struct net_device *dev)
+static void netdev_init_locks(struct net_device *dev)
{
netdev_for_each_tx_queue(dev, __netdev_init_queue_locks_one, NULL);
__netdev_init_queue_locks_one(dev, &dev->rx_queue, NULL);
+ netdev_set_addr_lockdep_class(&dev->addr_list_lock, dev->type);
}
/**
@@ -3888,7 +3904,7 @@ int register_netdevice(struct net_device *dev)
net = dev_net(dev);
spin_lock_init(&dev->addr_list_lock);
- netdev_init_queue_locks(dev);
+ netdev_init_locks(dev);
dev->iflink = -1;
^ permalink raw reply related [flat|nested] 83+ messages in thread
* Re: [crash] BUG: unable to handle kernel NULL pointer dereference at 0000000000000370
2008-07-21 21:35 ` Patrick McHardy
@ 2008-07-21 21:42 ` Patrick McHardy
2008-07-21 21:51 ` Larry Finger
1 sibling, 0 replies; 83+ messages in thread
From: Patrick McHardy @ 2008-07-21 21:42 UTC (permalink / raw)
To: David Miller
Cc: Larry.Finger, mingo, ischram, torvalds, akpm, netdev,
linux-kernel, linux-wireless, j
[-- Attachment #1: Type: text/plain, Size: 814 bytes --]
Patrick McHardy wrote:
> Patrick McHardy wrote:
>> David Miller wrote:
>>> Maybe something like the following should do it?
>>
>>
>> It looks correct in any case. I'm not sure whether it fixes
>> this lockdep warning though; according to the backtrace and
>> module list, it's b43 and dev_mc_sync in net/mac80211/main.c
>> that are causing the error, which don't seem to be included
>> in your patch. I'm unable to find where the xmit_lock lockdep
>> class was previously initialized though, so I must be missing
>> something :)
>
> This is what I was missing: we're setting a lockdep class
> by default depending on dev->type. This patch combined
> with yours should fix all addr_list_lock warnings.
This one is a bit nicer: since we only have a single
addr_list_lock, we don't need to pass a pointer to the
lock.
[-- Attachment #2: x --]
[-- Type: text/plain, Size: 2233 bytes --]
net: set lockdep class for dev->addr_list_lock
Initialize dev->addr_list_lock lockdep classes equally to dev->_xmit_lock.
Signed-off-by: Patrick McHardy <kaber@trash.net>
diff --git a/net/core/dev.c b/net/core/dev.c
index 2eed17b..6f8b6c5 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -299,6 +299,7 @@ static const char *netdev_lock_name[] =
"_xmit_NONE"};
static struct lock_class_key netdev_xmit_lock_key[ARRAY_SIZE(netdev_lock_type)];
+static struct lock_class_key netdev_addr_lock_key[ARRAY_SIZE(netdev_lock_type)];
static inline unsigned short netdev_lock_pos(unsigned short dev_type)
{
@@ -311,8 +312,8 @@ static inline unsigned short netdev_lock_pos(unsigned short dev_type)
return ARRAY_SIZE(netdev_lock_type) - 1;
}
-static inline void netdev_set_lockdep_class(spinlock_t *lock,
- unsigned short dev_type)
+static inline void netdev_set_xmit_lockdep_class(spinlock_t *lock,
+ unsigned short dev_type)
{
int i;
@@ -320,9 +321,22 @@ static inline void netdev_set_lockdep_class(spinlock_t *lock,
lockdep_set_class_and_name(lock, &netdev_xmit_lock_key[i],
netdev_lock_name[i]);
}
+
+static inline void netdev_set_addr_lockdep_class(struct net_device *dev)
+{
+ int i;
+
+ i = netdev_lock_pos(dev->type);
+ lockdep_set_class_and_name(&dev->addr_list_lock,
+ &netdev_addr_lock_key[i],
+ netdev_lock_name[i]);
+}
#else
-static inline void netdev_set_lockdep_class(spinlock_t *lock,
- unsigned short dev_type)
+static inline void netdev_set_xmit_lockdep_class(spinlock_t *lock,
+ unsigned short dev_type)
+{
+}
+static inline void netdev_set_addr_lockdep_class(struct net_device *dev)
{
}
#endif
@@ -3843,7 +3857,7 @@ static void __netdev_init_queue_locks_one(struct net_device *dev,
void *_unused)
{
spin_lock_init(&dev_queue->_xmit_lock);
- netdev_set_lockdep_class(&dev_queue->_xmit_lock, dev->type);
+ netdev_set_xmit_lockdep_class(&dev_queue->_xmit_lock, dev->type);
dev_queue->xmit_lock_owner = -1;
}
@@ -3888,6 +3902,7 @@ int register_netdevice(struct net_device *dev)
net = dev_net(dev);
spin_lock_init(&dev->addr_list_lock);
+ netdev_set_addr_lockdep_class(dev);
netdev_init_queue_locks(dev);
dev->iflink = -1;
^ permalink raw reply related [flat|nested] 83+ messages in thread
* Re: [crash] BUG: unable to handle kernel NULL pointer dereference at 0000000000000370
2008-07-21 21:35 ` Patrick McHardy
2008-07-21 21:42 ` Patrick McHardy
@ 2008-07-21 21:51 ` Larry Finger
2008-07-21 22:04 ` Patrick McHardy
1 sibling, 1 reply; 83+ messages in thread
From: Larry Finger @ 2008-07-21 21:51 UTC (permalink / raw)
To: Patrick McHardy
Cc: David Miller, mingo, ischram, torvalds, akpm, netdev,
linux-kernel, linux-wireless, j
Patrick McHardy wrote:
> Patrick McHardy wrote:
>> David Miller wrote:
>>> Maybe something like the following should do it?
>>
>>
>> It looks correct in any case. I'm not sure whether it fixes
>> this lockdep warning though; according to the backtrace and
>> module list, it's b43 and dev_mc_sync in net/mac80211/main.c
>> that are causing the error, which don't seem to be included
>> in your patch. I'm unable to find where the xmit_lock lockdep
>> class was previously initialized though, so I must be missing
>> something :)
>
> This is what I was missing: we're setting a lockdep class
> by default depending on dev->type. This patch combined
> with yours should fix all addr_list_lock warnings.
No cigar yet. I tried davem's patch first, then yours on top of his. I still get
both the recursive locking and the kernel warning.
BTW, wireless doesn't work, but if I plug in the wire, then networking is OK.
The problem seems to be in mac80211, which is strange because I routinely run
the latest wireless-testing kernel, and all the wireless bits should be there
already.
I'm still plugging away at the bisection. I think I'm past the kernel
revision that wouldn't build.
Larry
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: [crash] BUG: unable to handle kernel NULL pointer dereference at 0000000000000370
2008-07-21 21:51 ` Larry Finger
@ 2008-07-21 22:04 ` Patrick McHardy
2008-07-21 22:40 ` Larry Finger
0 siblings, 1 reply; 83+ messages in thread
From: Patrick McHardy @ 2008-07-21 22:04 UTC (permalink / raw)
To: Larry Finger
Cc: David Miller, mingo, ischram, torvalds, akpm, netdev,
linux-kernel, linux-wireless, j
[-- Attachment #1: Type: text/plain, Size: 398 bytes --]
Larry Finger wrote:
> Patrick McHardy wrote:
>>
>> This is what I was missing: we're setting a lockdep class
>> by default depending on dev->type. This patch combined
>> with yours should fix all addr_list_lock warnings.
>
> No cigar yet. I tried davem's patch first, then yours on top of his. I
> still get both the recursive locking and the kernel warning.
Does this one earn me my cigar? :)
[-- Attachment #2: x --]
[-- Type: text/plain, Size: 389 bytes --]
diff --git a/net/core/dev.c b/net/core/dev.c
index 2eed17b..371b1a0 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -259,7 +259,7 @@ static RAW_NOTIFIER_HEAD(netdev_chain);
DEFINE_PER_CPU(struct softnet_data, softnet_data);
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCKDEP
/*
* register_netdevice() inits txq->_xmit_lock and sets lockdep class
* according to dev->type
^ permalink raw reply related [flat|nested] 83+ messages in thread
* Re: [crash] BUG: unable to handle kernel NULL pointer dereference at 0000000000000370
2008-07-21 22:04 ` Patrick McHardy
@ 2008-07-21 22:40 ` Larry Finger
2008-07-21 23:15 ` David Miller
0 siblings, 1 reply; 83+ messages in thread
From: Larry Finger @ 2008-07-21 22:40 UTC (permalink / raw)
To: Patrick McHardy
Cc: David Miller, mingo, ischram, torvalds, akpm, netdev,
linux-kernel, linux-wireless, j
Patrick McHardy wrote:
> Larry Finger wrote:
>> Patrick McHardy wrote:
>>>
>>> This is what I was missing: we're setting a lockdep class
>>> by default depending on dev->type. This patch combined
>>> with yours should fix all addr_list_lock warnings.
>>
>> No cigar yet. I tried davem's patch first, then yours on top of his. I
>> still get both the recursive locking and the kernel warning.
>
> Does this one earn me my cigar? :)
Sorry :(
I used the davem patch, the second version of your first one, and your second
one. Both problems persist.
Still plugging away on bisection.
Larry
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: [crash] BUG: unable to handle kernel NULL pointer dereference at 0000000000000370
2008-07-21 22:40 ` Larry Finger
@ 2008-07-21 23:15 ` David Miller
2008-07-22 6:34 ` Larry Finger
0 siblings, 1 reply; 83+ messages in thread
From: David Miller @ 2008-07-21 23:15 UTC (permalink / raw)
To: Larry.Finger
Cc: kaber, mingo, ischram, torvalds, akpm, netdev, linux-kernel,
linux-wireless, j
From: Larry Finger <Larry.Finger@lwfinger.net>
Date: Mon, 21 Jul 2008 17:40:10 -0500
> Sorry :(
>
> I used the davem patch, the second version of your first one, and your second
> one. Both problems persist.
>
> Still plugging away on bisection.
GIT bisecting the lockdep problem is surely going to land you on:
commit e308a5d806c852f56590ffdd3834d0df0cbed8d7
Author: David S. Miller <davem@davemloft.net>
Date: Tue Jul 15 00:13:44 2008 -0700
netdev: Add netdev->addr_list_lock protection.
Add netif_addr_{lock,unlock}{,_bh}() helpers.
Use them to protect operations that operate on or read
the network device unicast and multicast address lists.
Also use them in cases where the code simply wants to
block calls into the driver's ->set_rx_mode() and
->set_multicast_list() methods.
Signed-off-by: David S. Miller <davem@davemloft.net>
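For reference, the netif_addr_{lock,unlock}{,_bh}() helpers named above
are thin wrappers around the new lock; in include/linux/netdevice.h they
look essentially like this:

	static inline void netif_addr_lock(struct net_device *dev)
	{
		spin_lock(&dev->addr_list_lock);
	}

	static inline void netif_addr_lock_bh(struct net_device *dev)
	{
		spin_lock_bh(&dev->addr_list_lock);
	}

	static inline void netif_addr_unlock(struct net_device *dev)
	{
		spin_unlock(&dev->addr_list_lock);
	}

	static inline void netif_addr_unlock_bh(struct net_device *dev)
	{
		spin_unlock_bh(&dev->addr_list_lock);
	}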
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: [crash] BUG: unable to handle kernel NULL pointer dereference at 0000000000000370
2008-07-21 23:15 ` David Miller
@ 2008-07-22 6:34 ` Larry Finger
2008-07-22 10:51 ` Jarek Poplawski
2008-07-22 11:32 ` David Miller
0 siblings, 2 replies; 83+ messages in thread
From: Larry Finger @ 2008-07-22 6:34 UTC (permalink / raw)
To: David Miller
Cc: kaber, mingo, ischram, torvalds, akpm, netdev, linux-kernel,
linux-wireless, j
David Miller wrote:
> From: Larry Finger <Larry.Finger@lwfinger.net>
> Date: Mon, 21 Jul 2008 17:40:10 -0500
>
>> Sorry :(
>>
>> I used the davem patch, the second version of your first one, and your second
>> one. Both problems persist.
>>
>> Still plugging away on bisection.
>
> GIT bisecting the lockdep problem is surely going to land you on:
>
> commit e308a5d806c852f56590ffdd3834d0df0cbed8d7
No. It landed on this one.
37437bb2e1ae8af470dfcd5b4ff454110894ccaf is first bad commit
commit 37437bb2e1ae8af470dfcd5b4ff454110894ccaf
Author: David S. Miller <davem@davemloft.net>
Date: Wed Jul 16 02:15:04 2008 -0700
pkt_sched: Schedule qdiscs instead of netdev_queue.
When we have shared qdiscs, packets come out of the qdiscs
for multiple transmit queues.
Therefore it doesn't make any sense to schedule the transmit
queue when logically we cannot know ahead of time the TX
queue of the SKB that the qdisc->dequeue() will give us.
Just for sanity I added a BUG check to make sure we never
get into a state where the noop_qdisc is scheduled.
Signed-off-by: David S. Miller <davem@davemloft.net>
:040000 040000 4d13d1fb1ae37d9720c3db6b1368866e78621f55
f1a0f5e5a191e7904b528d9e10069a4324a5d328 M include
:040000 040000 3515aad52a2cdaaba85feeffc0944d7f07a19c96
4854d4f4df9726a2e8837037f82bde807bed2ede M net
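For context, the "BUG check" mentioned in that commit message is the
sanity check at the top of __netif_schedule(), essentially:

	BUG_ON(q == &noop_qdisc);

It is what fires as "kernel BUG at net/core/dev.c:1328!" in the trace
Larry posts below.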
Larry
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: [crash] BUG: unable to handle kernel NULL pointer dereference at 0000000000000370
2008-07-22 6:34 ` Larry Finger
@ 2008-07-22 10:51 ` Jarek Poplawski
2008-07-22 11:32 ` David Miller
1 sibling, 0 replies; 83+ messages in thread
From: Jarek Poplawski @ 2008-07-22 10:51 UTC (permalink / raw)
To: Larry Finger
Cc: David Miller, kaber, mingo, ischram, torvalds, akpm, netdev,
linux-kernel, linux-wireless, j
On 22-07-2008 08:34, Larry Finger wrote:
...
>>> I used the davem patch, the second version of your first one, and
>>> your second one. Both problems persist.
Could you send lockdep info after Patrick's "set lockdep classes"
patch?
Jarek P.
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: [crash] BUG: unable to handle kernel NULL pointer dereference at 0000000000000370
2008-07-22 6:34 ` Larry Finger
2008-07-22 10:51 ` Jarek Poplawski
@ 2008-07-22 11:32 ` David Miller
2008-07-22 12:52 ` Larry Finger
2008-07-22 13:02 ` Larry Finger
1 sibling, 2 replies; 83+ messages in thread
From: David Miller @ 2008-07-22 11:32 UTC (permalink / raw)
To: Larry.Finger
Cc: kaber, mingo, ischram, torvalds, akpm, netdev, linux-kernel,
linux-wireless, j
From: Larry Finger <Larry.Finger@lwfinger.net>
Date: Tue, 22 Jul 2008 01:34:28 -0500
> David Miller wrote:
> > From: Larry Finger <Larry.Finger@lwfinger.net>
> > Date: Mon, 21 Jul 2008 17:40:10 -0500
> >
> >> Sorry :(
> >>
> >> I used the davem patch, the second version of your first one, and your second
> >> one. Both problems persist.
> >>
> >> Still plugging away on bisection.
> >
> > GIT bisecting the lockdep problem is surely going to land you on:
> >
> > commit e308a5d806c852f56590ffdd3834d0df0cbed8d7
>
> No. It landed on this one.
For the lockdep warnings?
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: [crash] BUG: unable to handle kernel NULL pointer dereference at 0000000000000370
2008-07-22 11:32 ` David Miller
@ 2008-07-22 12:52 ` Larry Finger
2008-07-22 20:43 ` David Miller
2008-07-22 13:02 ` Larry Finger
1 sibling, 1 reply; 83+ messages in thread
From: Larry Finger @ 2008-07-22 12:52 UTC (permalink / raw)
To: David Miller
Cc: kaber, mingo, ischram, torvalds, akpm, netdev, linux-kernel,
linux-wireless, j
David Miller wrote:
> From: Larry Finger <Larry.Finger@lwfinger.net>
> Date: Tue, 22 Jul 2008 01:34:28 -0500
>
>> David Miller wrote:
>>> From: Larry Finger <Larry.Finger@lwfinger.net>
>>> Date: Mon, 21 Jul 2008 17:40:10 -0500
>>>
>>>> Sorry :(
>>>>
>>>> I used the davem patch, the second version of your first one, and your second
>>>> one. Both problems persist.
>>>>
>>>> Still plugging away on bisection.
>>> GIT bisecting the lockdep problem is surely going to land you on:
>>>
>>> commit e308a5d806c852f56590ffdd3834d0df0cbed8d7
>> No. It landed on this one.
>
> For the lockdep warnings?
No - this one triggers the kernel BUG as follows:
------------[ cut here ]------------
kernel BUG at net/core/dev.c:1328!
invalid opcode: 0000 [1] SMP
CPU 0
Modules linked in: af_packet rfkill_input nfs lockd nfs_acl sunrpc
cpufreq_conservative cpufreq_userspace cpufreq_powersave powernow_k8 fuse loop
dm_mod arc4 ecb crypto_blkcipher b43 firmware_class rfkill mac80211 cfg80211
led_class input_polldev k8temp sr_mod battery ac ssb button hwmon forcedeth
cdrom serio_raw sg ohci_hcd ehci_hcd sd_mod usbcore edd fan thermal processor
ext3 mbcache jbd pata_amd ahci libata scsi_mod dock
Pid: 2003, comm: b43 Not tainted 2.6.26-rc8-Linus-git-01424-g37437bb #43
RIP: 0010:[<ffffffff803958c6>] [<ffffffff803958c6>] __netif_schedule+0x12/0x75
RSP: 0018:ffff8100b9e33de0 EFLAGS: 00010246
RAX: ffff8100b63819c0 RBX: ffffffff80545300 RCX: ffff8100b6381980
RDX: 00000000ffffffff RSI: 0000000000000001 RDI: ffffffff80545300
RBP: ffff8100b7b45158 R08: ffff8100b89d8000 R09: ffff8100b9d26000
R10: ffff8100b7b44480 R11: ffffffffa01239ef R12: ffff8100b7b44480
R13: ffff8100b9d26000 R14: ffff8100b89d8000 R15: 0000000000000000
FS: 00007f494406a6f0(0000) GS:ffffffff8055e000(0000) knlGS:0000000000000000
CS: 0010 DS: 0018 ES: 0018 CR0: 000000008005003b
CR2: 00007f49440933dc CR3: 0000000000201000 CR4: 00000000000006e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process b43 (pid: 2003, threadinfo ffff8100b9e32000, task ffff8100b4a3e480)
Stack: ffff8100b7b45158 ffff8100b89d8900 ffff8100b7b45158 ffffffffa0158455
ffff8100ba3287c0 0000000000000246 0000000000000000 0000000000000000
ffff8100b9e33e70 ffff8100b7b451b8 ffff8100ba3287c0 ffff8100b7b451b0
Call Trace:
[<ffffffffa0158455>] ? :mac80211:ieee80211_scan_completed+0x25b/0x2e1
[<ffffffffa01586d6>] ? :mac80211:ieee80211_sta_scan_work+0x0/0x1b8
[<ffffffff8023f7d7>] ? run_workqueue+0xf1/0x1f3
[<ffffffff8023f9b4>] ? worker_thread+0xdb/0xea
[<ffffffff80243017>] ? autoremove_wake_function+0x0/0x2e
[<ffffffff8023f8d9>] ? worker_thread+0x0/0xea
[<ffffffff80242cff>] ? kthread+0x47/0x73
[<ffffffff80402845>] ? trace_hardirqs_on_thunk+0x35/0x3a
[<ffffffff8020cd48>] ? child_rip+0xa/0x12
[<ffffffff8020c45f>] ? restore_args+0x0/0x30
[<ffffffff8021d3b6>] ? flat_send_IPI_mask+0x0/0x67
[<ffffffff80242c93>] ? kthreadd+0x188/0x1ad
[<ffffffff80242c93>] ? kthreadd+0x188/0x1ad
[<ffffffff80242cb8>] ? kthread+0x0/0x73
[<ffffffff8020cd3e>] ? child_rip+0x0/0x12
Code: 00 00 75 0a 55 9d 5e 5b 5d e9 32 64 eb ff e8 21 73 eb ff 55 9d 59 5b 5d c3
55 53 48 89 fb 48 83 ec 08 48 81 ff 00 53 54 80 75 04 <0f> 0b eb fe 48 8d 47 30
f0 0f ba 28 01 19 d2 85 d2 75 4c 9c 5d
RIP [<ffffffff803958c6>] __netif_schedule+0x12/0x75
RSP <ffff8100b9e33de0>
---[ end trace 396dc6bdf73da468 ]---
I'll have to trace back to see which of the bisections produced both the lockdep
and the kernel bug.
Larry
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: [crash] BUG: unable to handle kernel NULL pointer dereference at 0000000000000370
2008-07-22 11:32 ` David Miller
2008-07-22 12:52 ` Larry Finger
@ 2008-07-22 13:02 ` Larry Finger
2008-07-22 14:53 ` Patrick McHardy
2008-07-22 16:39 ` Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98() Larry Finger
1 sibling, 2 replies; 83+ messages in thread
From: Larry Finger @ 2008-07-22 13:02 UTC (permalink / raw)
To: David Miller
Cc: kaber, mingo, ischram, torvalds, akpm, netdev, linux-kernel,
linux-wireless, j
David Miller wrote:
> From: Larry Finger <Larry.Finger@lwfinger.net>
> Date: Tue, 22 Jul 2008 01:34:28 -0500
>
>> David Miller wrote:
>>> From: Larry Finger <Larry.Finger@lwfinger.net>
>>> Date: Mon, 21 Jul 2008 17:40:10 -0500
>>>
>>>> Sorry :(
>>>>
>>>> I used the davem patch, the second version of your first one, and your second
>>>> one. Both problems persist.
>>>>
>>>> Still plugging away on bisection.
>>> GIT bisecting the lockdep problem is surely going to land you on:
>>>
>>> commit e308a5d806c852f56590ffdd3834d0df0cbed8d7
>> No. It landed on this one.
>
> For the lockdep warnings?
When I was just one commit later, I got both the lockdep warning and the BUG.
This is the commit in question.
commit 16361127ebed0fb8f9d7cc94c6e137eaf710f676
Author: David S. Miller <davem@davemloft.net>
Date: Wed Jul 16 02:23:17 2008 -0700
pkt_sched: dev_init_scheduler() does not need to lock qdisc tree.
We are registering the device, there is no way anyone can get
at this object's qdiscs yet in any meaningful way.
Signed-off-by: David S. Miller <davem@davemloft.net>
Larry
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: [crash] BUG: unable to handle kernel NULL pointer dereference at 0000000000000370
2008-07-22 13:02 ` Larry Finger
@ 2008-07-22 14:53 ` Patrick McHardy
2008-07-22 21:17 ` David Miller
2008-07-22 16:39 ` Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98() Larry Finger
1 sibling, 1 reply; 83+ messages in thread
From: Patrick McHardy @ 2008-07-22 14:53 UTC (permalink / raw)
To: Larry Finger
Cc: David Miller, mingo, ischram, torvalds, akpm, netdev,
linux-kernel, linux-wireless, j
[-- Attachment #1: Type: text/plain, Size: 1663 bytes --]
Larry Finger wrote:
> David Miller wrote:
>> From: Larry Finger <Larry.Finger@lwfinger.net>
>> Date: Tue, 22 Jul 2008 01:34:28 -0500
>>
>>>> GIT bisecting the lockdep problem is surely going to land you on:
>>>>
>>>> commit e308a5d806c852f56590ffdd3834d0df0cbed8d7
>>> No. It landed on this one.
>>
>> For the lockdep warnings?
>
> When I was just one commit later, I got both the lockdep warning and the
> BUG. This is the commit in question.
I actually don't see how you could still get the warning with
Dave's patch and the two I sent applied.
The warning is triggered by the dev_mc_sync call in
ieee80211_set_multicast_list:
dev_mc_sync(local->mdev, dev);
local->mdev is the wmaster device, which has its type set to
ARPHRD_IEEE80211. dev is a regular wireless device with its type
set to ARPHRD_ETHER. So they have distinct lockdep classes set
by register_netdevice.
The warning is:
Jul 21 15:11:07 larrylap kernel: NetworkManager/2661 is trying to
acquire lock:
Jul 21 15:11:07 larrylap kernel: (&dev->addr_list_lock){-...}, at:
[<ffffffff803a2961>] dev_mc_sync+0x19/0x57
Jul 21 15:11:07 larrylap kernel:
Jul 21 15:11:07 larrylap kernel: but task is already holding lock:
Jul 21 15:11:07 larrylap kernel: (&dev->addr_list_lock){-...}, at:
[<ffffffff8039e7c5>] dev_set_rx_mode+0x19/0x2e
Jul 21 15:11:07 larrylap kernel:
The only lock already held is dev->addr_list_lock; the one taken
by dev_mc_sync is local->mdev->addr_list_lock. And this shouldn't
cause any warnings because of the distinct lockdep classes.
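A condensed sketch of the nesting just described, with the call chain
from the trace annotated in comments (simplified, not the exact mac80211
code):

	netif_addr_lock_bh(dev);	/* dev_set_rx_mode(): lockdep class
					 * comes from dev->type ==
					 * ARPHRD_ETHER */
	/* __dev_set_rx_mode() calls into mac80211's
	 * ieee80211_set_multicast_list(), which does: */
	dev_mc_sync(local->mdev, dev);	/* takes mdev->addr_list_lock nested;
					 * class from mdev->type ==
					 * ARPHRD_IEEE80211 */
	netif_addr_unlock_bh(dev);

With per-type classes the two locks are distinct to lockdep, which is why
the recursion report should go away.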
Could you please retry with the three patches attached to this
mail? If the lockdep warning still triggers, please post it again.
[-- Attachment #2: 01.diff --]
[-- Type: text/x-diff, Size: 5578 bytes --]
diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index 9737c06..a641eea 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -5041,6 +5041,7 @@ static int bond_check_params(struct bond_params *params)
}
static struct lock_class_key bonding_netdev_xmit_lock_key;
+static struct lock_class_key bonding_netdev_addr_lock_key;
static void bond_set_lockdep_class_one(struct net_device *dev,
struct netdev_queue *txq,
@@ -5052,6 +5053,8 @@ static void bond_set_lockdep_class_one(struct net_device *dev,
static void bond_set_lockdep_class(struct net_device *dev)
{
+ lockdep_set_class(&dev->addr_list_lock,
+ &bonding_netdev_addr_lock_key);
netdev_for_each_tx_queue(dev, bond_set_lockdep_class_one, NULL);
}
diff --git a/drivers/net/hamradio/bpqether.c b/drivers/net/hamradio/bpqether.c
index b6500b2..58f4b1d 100644
--- a/drivers/net/hamradio/bpqether.c
+++ b/drivers/net/hamradio/bpqether.c
@@ -123,6 +123,7 @@ static LIST_HEAD(bpq_devices);
* off into a separate class since they always nest.
*/
static struct lock_class_key bpq_netdev_xmit_lock_key;
+static struct lock_class_key bpq_netdev_addr_lock_key;
static void bpq_set_lockdep_class_one(struct net_device *dev,
struct netdev_queue *txq,
@@ -133,6 +134,7 @@ static void bpq_set_lockdep_class_one(struct net_device *dev,
static void bpq_set_lockdep_class(struct net_device *dev)
{
+ lockdep_set_class(&dev->addr_list_lock, &bpq_netdev_addr_lock_key);
netdev_for_each_tx_queue(dev, bpq_set_lockdep_class_one, NULL);
}
diff --git a/drivers/net/macvlan.c b/drivers/net/macvlan.c
index efbc155..4239450 100644
--- a/drivers/net/macvlan.c
+++ b/drivers/net/macvlan.c
@@ -276,6 +276,7 @@ static int macvlan_change_mtu(struct net_device *dev, int new_mtu)
* separate class since they always nest.
*/
static struct lock_class_key macvlan_netdev_xmit_lock_key;
+static struct lock_class_key macvlan_netdev_addr_lock_key;
#define MACVLAN_FEATURES \
(NETIF_F_SG | NETIF_F_ALL_CSUM | NETIF_F_HIGHDMA | NETIF_F_FRAGLIST | \
@@ -295,6 +296,8 @@ static void macvlan_set_lockdep_class_one(struct net_device *dev,
static void macvlan_set_lockdep_class(struct net_device *dev)
{
+ lockdep_set_class(&dev->addr_list_lock,
+ &macvlan_netdev_addr_lock_key);
netdev_for_each_tx_queue(dev, macvlan_set_lockdep_class_one, NULL);
}
diff --git a/drivers/net/wireless/hostap/hostap_hw.c b/drivers/net/wireless/hostap/hostap_hw.c
index 13d5882..3153fe9 100644
--- a/drivers/net/wireless/hostap/hostap_hw.c
+++ b/drivers/net/wireless/hostap/hostap_hw.c
@@ -3101,6 +3101,7 @@ static void prism2_clear_set_tim_queue(local_info_t *local)
* This is a natural nesting, which needs a split lock type.
*/
static struct lock_class_key hostap_netdev_xmit_lock_key;
+static struct lock_class_key hostap_netdev_addr_lock_key;
static void prism2_set_lockdep_class_one(struct net_device *dev,
struct netdev_queue *txq,
@@ -3112,6 +3113,8 @@ static void prism2_set_lockdep_class_one(struct net_device *dev,
static void prism2_set_lockdep_class(struct net_device *dev)
{
+ lockdep_set_class(&dev->addr_list_lock,
+ &hostap_netdev_addr_lock_key);
netdev_for_each_tx_queue(dev, prism2_set_lockdep_class_one, NULL);
}
diff --git a/net/8021q/vlan_dev.c b/net/8021q/vlan_dev.c
index f42bc2b..4bf014e 100644
--- a/net/8021q/vlan_dev.c
+++ b/net/8021q/vlan_dev.c
@@ -569,6 +569,7 @@ static void vlan_dev_set_rx_mode(struct net_device *vlan_dev)
* separate class since they always nest.
*/
static struct lock_class_key vlan_netdev_xmit_lock_key;
+static struct lock_class_key vlan_netdev_addr_lock_key;
static void vlan_dev_set_lockdep_one(struct net_device *dev,
struct netdev_queue *txq,
@@ -581,6 +582,9 @@ static void vlan_dev_set_lockdep_one(struct net_device *dev,
static void vlan_dev_set_lockdep_class(struct net_device *dev, int subclass)
{
+ lockdep_set_class_and_subclass(&dev->addr_list_lock,
+ &vlan_netdev_addr_lock_key,
+ subclass);
netdev_for_each_tx_queue(dev, vlan_dev_set_lockdep_one, &subclass);
}
diff --git a/net/netrom/af_netrom.c b/net/netrom/af_netrom.c
index fccc250..532e4fa 100644
--- a/net/netrom/af_netrom.c
+++ b/net/netrom/af_netrom.c
@@ -73,6 +73,7 @@ static const struct proto_ops nr_proto_ops;
* separate class since they always nest.
*/
static struct lock_class_key nr_netdev_xmit_lock_key;
+static struct lock_class_key nr_netdev_addr_lock_key;
static void nr_set_lockdep_one(struct net_device *dev,
struct netdev_queue *txq,
@@ -83,6 +84,7 @@ static void nr_set_lockdep_one(struct net_device *dev,
static void nr_set_lockdep_key(struct net_device *dev)
{
+ lockdep_set_class(&dev->addr_list_lock, &nr_netdev_addr_lock_key);
netdev_for_each_tx_queue(dev, nr_set_lockdep_one, NULL);
}
diff --git a/net/rose/af_rose.c b/net/rose/af_rose.c
index dbc963b..a7f1ce1 100644
--- a/net/rose/af_rose.c
+++ b/net/rose/af_rose.c
@@ -74,6 +74,7 @@ ax25_address rose_callsign;
* separate class since they always nest.
*/
static struct lock_class_key rose_netdev_xmit_lock_key;
+static struct lock_class_key rose_netdev_addr_lock_key;
static void rose_set_lockdep_one(struct net_device *dev,
struct netdev_queue *txq,
@@ -84,6 +85,7 @@ static void rose_set_lockdep_one(struct net_device *dev,
static void rose_set_lockdep_key(struct net_device *dev)
{
+ lockdep_set_class(&dev->addr_list_lock, &rose_netdev_addr_lock_key);
netdev_for_each_tx_queue(dev, rose_set_lockdep_one, NULL);
}
[-- Attachment #3: 02.diff --]
[-- Type: text/x-diff, Size: 2233 bytes --]
net: set lockdep class for dev->addr_list_lock
Initialize dev->addr_list_lock lockdep classes equally to dev->_xmit_lock.
Signed-off-by: Patrick McHardy <kaber@trash.net>
diff --git a/net/core/dev.c b/net/core/dev.c
index 2eed17b..6f8b6c5 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -299,6 +299,7 @@ static const char *netdev_lock_name[] =
"_xmit_NONE"};
static struct lock_class_key netdev_xmit_lock_key[ARRAY_SIZE(netdev_lock_type)];
+static struct lock_class_key netdev_addr_lock_key[ARRAY_SIZE(netdev_lock_type)];
static inline unsigned short netdev_lock_pos(unsigned short dev_type)
{
@@ -311,8 +312,8 @@ static inline unsigned short netdev_lock_pos(unsigned short dev_type)
return ARRAY_SIZE(netdev_lock_type) - 1;
}
-static inline void netdev_set_lockdep_class(spinlock_t *lock,
- unsigned short dev_type)
+static inline void netdev_set_xmit_lockdep_class(spinlock_t *lock,
+ unsigned short dev_type)
{
int i;
@@ -320,9 +321,22 @@ static inline void netdev_set_lockdep_class(spinlock_t *lock,
lockdep_set_class_and_name(lock, &netdev_xmit_lock_key[i],
netdev_lock_name[i]);
}
+
+static inline void netdev_set_addr_lockdep_class(struct net_device *dev)
+{
+ int i;
+
+ i = netdev_lock_pos(dev->type);
+ lockdep_set_class_and_name(&dev->addr_list_lock,
+ &netdev_addr_lock_key[i],
+ netdev_lock_name[i]);
+}
#else
-static inline void netdev_set_lockdep_class(spinlock_t *lock,
- unsigned short dev_type)
+static inline void netdev_set_xmit_lockdep_class(spinlock_t *lock,
+ unsigned short dev_type)
+{
+}
+static inline void netdev_set_addr_lockdep_class(struct net_device *dev)
{
}
#endif
@@ -3843,7 +3857,7 @@ static void __netdev_init_queue_locks_one(struct net_device *dev,
void *_unused)
{
spin_lock_init(&dev_queue->_xmit_lock);
- netdev_set_lockdep_class(&dev_queue->_xmit_lock, dev->type);
+ netdev_set_xmit_lockdep_class(&dev_queue->_xmit_lock, dev->type);
dev_queue->xmit_lock_owner = -1;
}
@@ -3888,6 +3902,7 @@ int register_netdevice(struct net_device *dev)
net = dev_net(dev);
spin_lock_init(&dev->addr_list_lock);
+ netdev_set_addr_lockdep_class(dev);
netdev_init_queue_locks(dev);
dev->iflink = -1;
[-- Attachment #4: 03.diff --]
[-- Type: text/x-diff, Size: 389 bytes --]
diff --git a/net/core/dev.c b/net/core/dev.c
index 2eed17b..371b1a0 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -259,7 +259,7 @@ static RAW_NOTIFIER_HEAD(netdev_chain);
DEFINE_PER_CPU(struct softnet_data, softnet_data);
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCKDEP
/*
* register_netdevice() inits txq->_xmit_lock and sets lockdep class
* according to dev->type
^ permalink raw reply related [flat|nested] 83+ messages in thread
* Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-22 13:02 ` Larry Finger
2008-07-22 14:53 ` Patrick McHardy
@ 2008-07-22 16:39 ` Larry Finger
2008-07-22 17:20 ` Patrick McHardy
1 sibling, 1 reply; 83+ messages in thread
From: Larry Finger @ 2008-07-22 16:39 UTC (permalink / raw)
To: David Miller, Patrick McHardy
Cc: torvalds, akpm, netdev, linux-kernel, linux-wireless
David and Patrick,
Here is the latest on this problem.
I pulled from Linus's tree this morning and now have git-05752-g93ded9b. The
kernel WARNING from __netif_schedule and the lockdep warning are present with or
without the patches from yesterday.
As I stated earlier, the kernel WARNING (it was a BUG then) was introduced in
commit 37437bb2, which added the BUG statement.
The lockdep warning started with the next commit (16361127).
I am not using any network traffic shaping. Is it correct that the faulty
condition is not that q == &noop_qdisc, but that __netif_schedule was called
when that condition exists?
The lockdep warning is:
=============================================
[ INFO: possible recursive locking detected ]
2.6.26-Linus-git-05752-g93ded9b #49
---------------------------------------------
NetworkManager/2611 is trying to acquire lock:
(&dev->addr_list_lock){-...}, at: [<ffffffff803a2ad1>] dev_mc_sync+0x19/0x57
but task is already holding lock:
(&dev->addr_list_lock){-...}, at: [<ffffffff8039e909>] dev_set_rx_mode+0x19/0x2e
other info that might help us debug this:
2 locks held by NetworkManager/2611:
#0: (rtnl_mutex){--..}, at: [<ffffffff803a8488>] rtnetlink_rcv+0x12/0x27
#1: (&dev->addr_list_lock){-...}, at: [<ffffffff8039e909>]
dev_set_rx_mode+0x19/0x2e
stack backtrace:
Pid: 2611, comm: NetworkManager Not tainted 2.6.26-Linus-git-05752-g93ded9b #49
Call Trace:
[<ffffffff80251b02>] __lock_acquire+0xb7b/0xecc
[<ffffffff80251ea4>] lock_acquire+0x51/0x6a
[<ffffffff803a2ad1>] dev_mc_sync+0x19/0x57
[<ffffffff8040b3fc>] _spin_lock_bh+0x23/0x2c
[<ffffffff803a2ad1>] dev_mc_sync+0x19/0x57
[<ffffffff8039e911>] dev_set_rx_mode+0x21/0x2e
[<ffffffff803a04da>] dev_open+0x8e/0xb0
[<ffffffff8039fe84>] dev_change_flags+0xa6/0x163
[<ffffffff803a7591>] do_setlink+0x286/0x349
[<ffffffff803a849d>] rtnetlink_rcv_msg+0x0/0x1ec
[<ffffffff803a849d>] rtnetlink_rcv_msg+0x0/0x1ec
[<ffffffff803a849d>] rtnetlink_rcv_msg+0x0/0x1ec
[<ffffffff803a77de>] rtnl_setlink+0x10b/0x10d
[<ffffffff803a849d>] rtnetlink_rcv_msg+0x0/0x1ec
[<ffffffff803b416f>] netlink_rcv_skb+0x34/0x7d
[<ffffffff803a8497>] rtnetlink_rcv+0x21/0x27
[<ffffffff803b3c6f>] netlink_unicast+0x1f0/0x261
[<ffffffff8039a58d>] __alloc_skb+0x66/0x12a
[<ffffffff803b3f48>] netlink_sendmsg+0x268/0x27b
[<ffffffff80393cb9>] sock_sendmsg+0xcb/0xe3
[<ffffffff80246aab>] autoremove_wake_function+0x0/0x2e
[<ffffffff8039b2d8>] verify_iovec+0x46/0x82
[<ffffffff80393ee8>] sys_sendmsg+0x217/0x28a
[<ffffffff80393642>] sockfd_lookup_light+0x1a/0x52
[<ffffffff80250990>] trace_hardirqs_on_caller+0xef/0x113
[<ffffffff8040af74>] trace_hardirqs_on_thunk+0x3a/0x3f
[<ffffffff8020be9b>] system_call_after_swapgs+0x7b/0x80
========================================================
The logged data for the WARNING is as follows:
------------[ cut here ]------------
WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
Modules linked in: af_packet nfs lockd nfs_acl rfkill_input sunrpc
cpufreq_conservative cpufreq_userspace cpufreq_powersave powernow_k8 fuse loop
dm_mod arc4 ecb crypto_blkcipher b43 firmware_class rfkill mac80211 cfg80211
led_class input_polldev battery ac button ssb serio_raw forcedeth sr_mod cdrom
k8temp hwmon sg sd_mod ehci_hcd ohci_hcd usbcore edd fan thermal processor ext3
mbcache jbd pata_amd ahci libata scsi_mod dock
Pid: 1990, comm: b43 Not tainted 2.6.26-Linus-git-05752-g93ded9b #49
Call Trace:
[<ffffffff80233f6d>] warn_on_slowpath+0x51/0x8c
[<ffffffff8039d937>] __netif_schedule+0x2c/0x98
[<ffffffffa015445d>] ieee80211_scan_completed+0x26b/0x2f1 [mac80211]
[<ffffffffa01546de>] ieee80211_sta_scan_work+0x0/0x1b8 [mac80211]
[<ffffffff8024325e>] run_workqueue+0xf0/0x1f2
[<ffffffff8024343b>] worker_thread+0xdb/0xea
[<ffffffff80246aab>] autoremove_wake_function+0x0/0x2e
[<ffffffff80243360>] worker_thread+0x0/0xea
[<ffffffff8024678b>] kthread+0x47/0x73
[<ffffffff8040af74>] trace_hardirqs_on_thunk+0x3a/0x3f
[<ffffffff8020cea9>] child_rip+0xa/0x11
[<ffffffff8020c4df>] restore_args+0x0/0x30
[<ffffffff8024671f>] kthreadd+0x188/0x1ad
[<ffffffff80246744>] kthread+0x0/0x73
[<ffffffff8020ce9f>] child_rip+0x0/0x11
---[ end trace 42d234b678daea7a ]---
Here is some other info I have found. The call to __netif_schedule from
ieee80211_scan_completed is through the following code from
include/linux/netdevice.h:
/**
* netif_wake_queue - restart transmit
* @dev: network device
*
* Allow upper layers to call the device hard_start_xmit routine.
* Used for flow control when transmit resources are available.
*/
static inline void netif_tx_wake_queue(struct netdev_queue *dev_queue)
{
#ifdef CONFIG_NETPOLL_TRAP
if (netpoll_trap()) {
clear_bit(__QUEUE_STATE_XOFF, &dev_queue->state);
return;
}
#endif
if (test_and_clear_bit(__QUEUE_STATE_XOFF, &dev_queue->state))
__netif_schedule(dev_queue->qdisc);
}
It doesn't make any difference if CONFIG_NETPOLL_TRAP is defined or not.
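For reference, the guard in __netif_schedule() that produces this
WARNING: after commit 37437bb2 it takes a struct Qdisc * and begins with
a sanity check roughly like the following (a BUG_ON when first added,
since demoted to a WARN in Linus's tree):

	void __netif_schedule(struct Qdisc *q)
	{
		if (WARN_ON(q == &noop_qdisc))	/* net/core/dev.c:1330 */
			return;
		/* ... otherwise mark the qdisc scheduled and raise
		 * NET_TX_SOFTIRQ ... */
	}

So the warning means netif_tx_wake_queue() was reached while
dev_queue->qdisc still pointed at noop_qdisc.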
Please let me know if I can provide any further information,
Larry
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-22 16:39 ` Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98() Larry Finger
@ 2008-07-22 17:20 ` Patrick McHardy
2008-07-22 18:39 ` Larry Finger
0 siblings, 1 reply; 83+ messages in thread
From: Patrick McHardy @ 2008-07-22 17:20 UTC (permalink / raw)
To: Larry Finger
Cc: David Miller, torvalds, akpm, netdev, linux-kernel,
linux-wireless
Larry Finger wrote:
> I pulled from Linus's tree this morning and now have git-05752-g93ded9b.
> The kernel WARNING from __netif_schedule and the lockdep warning are
> present with or without the patches from yesterday.
>
> As I stated earlier, the kernel WARNING (it was a BUG then) was
> introduced in commit 37437bb2 when the BUG statement was entered.
>
> The lockdep warning started with the next commit (16361127).
>
> The lockdep warning is:
>
> =============================================
> [ INFO: possible recursive locking detected ]
> 2.6.26-Linus-git-05752-g93ded9b #49
^^^^^^^^^^^^^^ dirty?
This kernel is not using the patches we sent.
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-22 17:20 ` Patrick McHardy
@ 2008-07-22 18:39 ` Larry Finger
2008-07-22 18:44 ` Patrick McHardy
2008-07-22 23:04 ` David Miller
0 siblings, 2 replies; 83+ messages in thread
From: Larry Finger @ 2008-07-22 18:39 UTC (permalink / raw)
To: Patrick McHardy
Cc: David Miller, torvalds, akpm, netdev, linux-kernel,
linux-wireless
Patrick McHardy wrote:
> Larry Finger wrote:
>>
>> =============================================
>> [ INFO: possible recursive locking detected ]
>> 2.6.26-Linus-git-05752-g93ded9b #49
>
> ^^^^^^^^^^^^^^ dirty?
>
> This kernel is not using the patches we sent.
No, but they didn't make any difference. I tried with all 3 applied, then backed
them out one by one. That was the state when I posted before.
Here are the dumps with all 3 patches applied:
=============================================
[ INFO: possible recursive locking detected ]
2.6.26-Linus-05752-g93ded9b-dirty #53
---------------------------------------------
b43/1997 is trying to acquire lock:
(_xmit_IEEE80211#2){-...}, at: [<ffffffffa028f322>]
ieee80211_scan_completed+0x130/0x2e1 [mac80211]
but task is already holding lock:
(_xmit_IEEE80211#2){-...}, at: [<ffffffffa028f322>]
ieee80211_scan_completed+0x130/0x2e1 [mac80211]
other info that might help us debug this:
3 locks held by b43/1997:
#0: ((name)){--..}, at: [<ffffffff80245185>] run_workqueue+0xa7/0x1f2
#1: (&(&local->scan_work)->work){--..}, at: [<ffffffff80245185>]
run_workqueue+0xa7/0x1f2
#2: (_xmit_IEEE80211#2){-...}, at: [<ffffffffa028f322>]
ieee80211_scan_completed+0x130/0x2e1 [mac80211]
stack backtrace:
Pid: 1997, comm: b43 Not tainted 2.6.26-Linus-05752-g93ded9b-dirty #53
Call Trace:
[<ffffffff80255616>] __lock_acquire+0xb7b/0xecc
[<ffffffff8040c9a0>] __mutex_unlock_slowpath+0x100/0x10b
[<ffffffff802559b8>] lock_acquire+0x51/0x6a
[<ffffffffa028f322>] ieee80211_scan_completed+0x130/0x2e1 [mac80211]
[<ffffffff8040dc08>] _spin_lock+0x1e/0x27
[<ffffffffa028f322>] ieee80211_scan_completed+0x130/0x2e1 [mac80211]
[<ffffffffa028f6ce>] ieee80211_sta_scan_work+0x0/0x1b8 [mac80211]
[<ffffffff802451ce>] run_workqueue+0xf0/0x1f2
[<ffffffff802453ab>] worker_thread+0xdb/0xea
[<ffffffff80248a5f>] autoremove_wake_function+0x0/0x2e
[<ffffffff802452d0>] worker_thread+0x0/0xea
[<ffffffff80248731>] kthread+0x47/0x73
[<ffffffff8040d7b1>] trace_hardirqs_on_thunk+0x3a/0x3f
[<ffffffff8020ceb9>] child_rip+0xa/0x11
[<ffffffff8020c4ef>] restore_args+0x0/0x30
[<ffffffff802486c5>] kthreadd+0x19a/0x1bf
[<ffffffff802486ea>] kthread+0x0/0x73
[<ffffffff8020ceaf>] child_rip+0x0/0x11
------------[ cut here ]------------
WARNING: at net/core/dev.c:1344 __netif_schedule+0x2c/0x98()
Modules linked in: af_packet rfkill_input nfs lockd nfs_acl sunrpc
cpufreq_conservative cpufreq_userspace cpufreq_powersave powernow_k8 fuse loop
dm_mod arc4 ecb crypto_blkcipher b43 firmware_class rfkill mac80211
snd_hda_intel cfg80211 led_class input_polldev k8temp snd_pcm snd_timer battery
hwmon sr_mod forcedeth ssb joydev ac button serio_raw cdrom snd soundcore
snd_page_alloc sg sd_mod ohci_hcd ehci_hcd usbcore edd fan thermal processor
ext3 mbcache jbd pata_amd ahci libata scsi_mod dock
Pid: 1997, comm: b43 Not tainted 2.6.26-Linus-05752-g93ded9b-dirty #53
Call Trace:
[<ffffffff80235d49>] warn_on_slowpath+0x51/0x8c
[<ffffffff8040d7b1>] trace_hardirqs_on_thunk+0x3a/0x3f
[<ffffffff803a2413>] __netif_schedule+0x2c/0x98
[<ffffffffa028f44d>] ieee80211_scan_completed+0x25b/0x2e1 [mac80211]
[<ffffffffa028f6ce>] ieee80211_sta_scan_work+0x0/0x1b8 [mac80211]
[<ffffffff802451ce>] run_workqueue+0xf0/0x1f2
[<ffffffff802453ab>] worker_thread+0xdb/0xea
[<ffffffff80248a5f>] autoremove_wake_function+0x0/0x2e
[<ffffffff802452d0>] worker_thread+0x0/0xea
[<ffffffff80248731>] kthread+0x47/0x73
[<ffffffff8040d7b1>] trace_hardirqs_on_thunk+0x3a/0x3f
[<ffffffff8020ceb9>] child_rip+0xa/0x11
[<ffffffff8020c4ef>] restore_args+0x0/0x30
[<ffffffff802486c5>] kthreadd+0x19a/0x1bf
[<ffffffff802486ea>] kthread+0x0/0x73
[<ffffffff8020ceaf>] child_rip+0x0/0x11
---[ end trace 88fab857dc2a4242 ]---
Larry
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-22 18:39 ` Larry Finger
@ 2008-07-22 18:44 ` Patrick McHardy
2008-07-22 19:30 ` Larry Finger
2008-07-22 23:04 ` David Miller
1 sibling, 1 reply; 83+ messages in thread
From: Patrick McHardy @ 2008-07-22 18:44 UTC (permalink / raw)
To: Larry Finger
Cc: David Miller, torvalds, akpm, netdev, linux-kernel,
linux-wireless
Larry Finger wrote:
> Patrick McHardy wrote:
>> Larry Finger wrote:
>>>
>>> =============================================
>>> [ INFO: possible recursive locking detected ]
>>> 2.6.26-Linus-git-05752-g93ded9b #49
>>
>> ^^^^^^^^^^^^^^ dirty?
>>
>> This kernel is not using the patches we sent.
>
> No, but they didn't make any difference. I tried with all 3 applied,
> then backed them out one by one. That was the state when I posted before.
Well, this is a completely different warning.
>
> Here are the dumps with all 3 patches applied:
>
> =============================================
> [ INFO: possible recursive locking detected ]
> 2.6.26-Linus-05752-g93ded9b-dirty #53
> ---------------------------------------------
> b43/1997 is trying to acquire lock:
> (_xmit_IEEE80211#2){-...}, at: [<ffffffffa028f322>]
> ieee80211_scan_completed+0x130/0x2e1 [mac80211]
>
> but task is already holding lock:
> (_xmit_IEEE80211#2){-...}, at: [<ffffffffa028f322>]
> ieee80211_scan_completed+0x130/0x2e1 [mac80211]
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-22 18:44 ` Patrick McHardy
@ 2008-07-22 19:30 ` Larry Finger
0 siblings, 0 replies; 83+ messages in thread
From: Larry Finger @ 2008-07-22 19:30 UTC (permalink / raw)
To: Patrick McHardy
Cc: David Miller, torvalds, akpm, netdev, linux-kernel,
linux-wireless
Patrick McHardy wrote:
> Larry Finger wrote:
>> Patrick McHardy wrote:
>>> Larry Finger wrote:
>>>>
>>>> =============================================
>>>> [ INFO: possible recursive locking detected ]
>>>> 2.6.26-Linus-git-05752-g93ded9b #49
>>>
>>> ^^^^^^^^^^^^^^ dirty?
>>>
>>> This kernel is not using the patches we sent.
>>
>> No, but they didn't make any difference. I tried with all 3 applied,
>> then backed them out one by one. That was the state when I posted before.
>
> Well, this is a completely different warning.
>
>>
>> Here are the dumps with all 3 patches applied:
>>
>> =============================================
>> [ INFO: possible recursive locking detected ]
>> 2.6.26-Linus-05752-g93ded9b-dirty #53
>> ---------------------------------------------
>> b43/1997 is trying to acquire lock:
>> (_xmit_IEEE80211#2){-...}, at: [<ffffffffa028f322>]
>> ieee80211_scan_completed+0x130/0x2e1 [mac80211]
>>
>> but task is already holding lock:
>> (_xmit_IEEE80211#2){-...}, at: [<ffffffffa028f322>]
>> ieee80211_scan_completed+0x130/0x2e1 [mac80211]
Sorry. After all those kernel builds I got bleary-eyed and just looked for
recursive locking without any regard for the details.
Larry
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: [crash] BUG: unable to handle kernel NULL pointer dereference at 0000000000000370
2008-07-22 12:52 ` Larry Finger
@ 2008-07-22 20:43 ` David Miller
0 siblings, 0 replies; 83+ messages in thread
From: David Miller @ 2008-07-22 20:43 UTC (permalink / raw)
To: Larry.Finger
Cc: kaber, mingo, ischram, torvalds, akpm, netdev, linux-kernel,
linux-wireless, j
From: Larry Finger <Larry.Finger@lwfinger.net>
Date: Tue, 22 Jul 2008 07:52:24 -0500
> David Miller wrote:
> > From: Larry Finger <Larry.Finger@lwfinger.net>
> > Date: Tue, 22 Jul 2008 01:34:28 -0500
> >
> >> David Miller wrote:
> >>> From: Larry Finger <Larry.Finger@lwfinger.net>
> >>> Date: Mon, 21 Jul 2008 17:40:10 -0500
> >>>
> >>>> Sorry :(
> >>>>
> >>>> I used the davem patch, the second version of your first one, and your second
> >>>> one. Both problems persist.
> >>>>
> >>>> Still plugging away on bisection.
> >>> GIT bisecting the lockdep problem is surely going to land you on:
> >>>
> >>> commit e308a5d806c852f56590ffdd3834d0df0cbed8d7
> >> No. It landed on this one.
> >
> > For the lockdep warnings?
>
> No - this one triggers the kernel BUG as follows:
Well, we were trying to fix the lockdep warnings. One
thing at a time :-)
And that BUG is now just a WARN in Linus's tree, so if
you update you should get further along.
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: [crash] BUG: unable to handle kernel NULL pointer dereference at 0000000000000370
2008-07-22 14:53 ` Patrick McHardy
@ 2008-07-22 21:17 ` David Miller
0 siblings, 0 replies; 83+ messages in thread
From: David Miller @ 2008-07-22 21:17 UTC (permalink / raw)
To: kaber
Cc: Larry.Finger, mingo, ischram, torvalds, akpm, netdev,
linux-kernel, linux-wireless, j
From: Patrick McHardy <kaber@trash.net>
Date: Tue, 22 Jul 2008 16:53:30 +0200
> Could you please retry with the three patches attached to this
> mail? If the lockdep warning still triggers, please post it again.
Since I'm convinced the original lockdep spurious warning
issue is cured, and we're looking at something different
now, I've integrated all of these fixes together as one
commit as below.
Thanks a lot Patrick.
netdev: Handle ->addr_list_lock just like ->_xmit_lock for lockdep.
The new address list lock needs to handle the same device layering
issues that the _xmit_lock one does.
This integrates work done by Patrick McHardy.
Signed-off-by: David S. Miller <davem@davemloft.net>
diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index 9737c06..a641eea 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -5041,6 +5041,7 @@ static int bond_check_params(struct bond_params *params)
}
static struct lock_class_key bonding_netdev_xmit_lock_key;
+static struct lock_class_key bonding_netdev_addr_lock_key;
static void bond_set_lockdep_class_one(struct net_device *dev,
struct netdev_queue *txq,
@@ -5052,6 +5053,8 @@ static void bond_set_lockdep_class_one(struct net_device *dev,
static void bond_set_lockdep_class(struct net_device *dev)
{
+ lockdep_set_class(&dev->addr_list_lock,
+ &bonding_netdev_addr_lock_key);
netdev_for_each_tx_queue(dev, bond_set_lockdep_class_one, NULL);
}
diff --git a/drivers/net/hamradio/bpqether.c b/drivers/net/hamradio/bpqether.c
index b6500b2..58f4b1d 100644
--- a/drivers/net/hamradio/bpqether.c
+++ b/drivers/net/hamradio/bpqether.c
@@ -123,6 +123,7 @@ static LIST_HEAD(bpq_devices);
* off into a separate class since they always nest.
*/
static struct lock_class_key bpq_netdev_xmit_lock_key;
+static struct lock_class_key bpq_netdev_addr_lock_key;
static void bpq_set_lockdep_class_one(struct net_device *dev,
struct netdev_queue *txq,
@@ -133,6 +134,7 @@ static void bpq_set_lockdep_class_one(struct net_device *dev,
static void bpq_set_lockdep_class(struct net_device *dev)
{
+ lockdep_set_class(&dev->addr_list_lock, &bpq_netdev_addr_lock_key);
netdev_for_each_tx_queue(dev, bpq_set_lockdep_class_one, NULL);
}
diff --git a/drivers/net/macvlan.c b/drivers/net/macvlan.c
index efbc155..4239450 100644
--- a/drivers/net/macvlan.c
+++ b/drivers/net/macvlan.c
@@ -276,6 +276,7 @@ static int macvlan_change_mtu(struct net_device *dev, int new_mtu)
* separate class since they always nest.
*/
static struct lock_class_key macvlan_netdev_xmit_lock_key;
+static struct lock_class_key macvlan_netdev_addr_lock_key;
#define MACVLAN_FEATURES \
(NETIF_F_SG | NETIF_F_ALL_CSUM | NETIF_F_HIGHDMA | NETIF_F_FRAGLIST | \
@@ -295,6 +296,8 @@ static void macvlan_set_lockdep_class_one(struct net_device *dev,
static void macvlan_set_lockdep_class(struct net_device *dev)
{
+ lockdep_set_class(&dev->addr_list_lock,
+ &macvlan_netdev_addr_lock_key);
netdev_for_each_tx_queue(dev, macvlan_set_lockdep_class_one, NULL);
}
diff --git a/drivers/net/wireless/hostap/hostap_hw.c b/drivers/net/wireless/hostap/hostap_hw.c
index 13d5882..3153fe9 100644
--- a/drivers/net/wireless/hostap/hostap_hw.c
+++ b/drivers/net/wireless/hostap/hostap_hw.c
@@ -3101,6 +3101,7 @@ static void prism2_clear_set_tim_queue(local_info_t *local)
* This is a natural nesting, which needs a split lock type.
*/
static struct lock_class_key hostap_netdev_xmit_lock_key;
+static struct lock_class_key hostap_netdev_addr_lock_key;
static void prism2_set_lockdep_class_one(struct net_device *dev,
struct netdev_queue *txq,
@@ -3112,6 +3113,8 @@ static void prism2_set_lockdep_class_one(struct net_device *dev,
static void prism2_set_lockdep_class(struct net_device *dev)
{
+ lockdep_set_class(&dev->addr_list_lock,
+ &hostap_netdev_addr_lock_key);
netdev_for_each_tx_queue(dev, prism2_set_lockdep_class_one, NULL);
}
diff --git a/net/8021q/vlan_dev.c b/net/8021q/vlan_dev.c
index f42bc2b..4bf014e 100644
--- a/net/8021q/vlan_dev.c
+++ b/net/8021q/vlan_dev.c
@@ -569,6 +569,7 @@ static void vlan_dev_set_rx_mode(struct net_device *vlan_dev)
* separate class since they always nest.
*/
static struct lock_class_key vlan_netdev_xmit_lock_key;
+static struct lock_class_key vlan_netdev_addr_lock_key;
static void vlan_dev_set_lockdep_one(struct net_device *dev,
struct netdev_queue *txq,
@@ -581,6 +582,9 @@ static void vlan_dev_set_lockdep_one(struct net_device *dev,
static void vlan_dev_set_lockdep_class(struct net_device *dev, int subclass)
{
+ lockdep_set_class_and_subclass(&dev->addr_list_lock,
+ &vlan_netdev_addr_lock_key,
+ subclass);
netdev_for_each_tx_queue(dev, vlan_dev_set_lockdep_one, &subclass);
}
diff --git a/net/core/dev.c b/net/core/dev.c
index 65eea83..6bf217d 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -261,7 +261,7 @@ static RAW_NOTIFIER_HEAD(netdev_chain);
DEFINE_PER_CPU(struct softnet_data, softnet_data);
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#ifdef CONFIG_LOCKDEP
/*
* register_netdevice() inits txq->_xmit_lock and sets lockdep class
* according to dev->type
@@ -301,6 +301,7 @@ static const char *netdev_lock_name[] =
"_xmit_NONE"};
static struct lock_class_key netdev_xmit_lock_key[ARRAY_SIZE(netdev_lock_type)];
+static struct lock_class_key netdev_addr_lock_key[ARRAY_SIZE(netdev_lock_type)];
static inline unsigned short netdev_lock_pos(unsigned short dev_type)
{
@@ -313,8 +314,8 @@ static inline unsigned short netdev_lock_pos(unsigned short dev_type)
return ARRAY_SIZE(netdev_lock_type) - 1;
}
-static inline void netdev_set_lockdep_class(spinlock_t *lock,
- unsigned short dev_type)
+static inline void netdev_set_xmit_lockdep_class(spinlock_t *lock,
+ unsigned short dev_type)
{
int i;
@@ -322,9 +323,22 @@ static inline void netdev_set_lockdep_class(spinlock_t *lock,
lockdep_set_class_and_name(lock, &netdev_xmit_lock_key[i],
netdev_lock_name[i]);
}
+
+static inline void netdev_set_addr_lockdep_class(struct net_device *dev)
+{
+ int i;
+
+ i = netdev_lock_pos(dev->type);
+ lockdep_set_class_and_name(&dev->addr_list_lock,
+ &netdev_addr_lock_key[i],
+ netdev_lock_name[i]);
+}
#else
-static inline void netdev_set_lockdep_class(spinlock_t *lock,
- unsigned short dev_type)
+static inline void netdev_set_xmit_lockdep_class(spinlock_t *lock,
+ unsigned short dev_type)
+{
+}
+static inline void netdev_set_addr_lockdep_class(struct net_device *dev)
{
}
#endif
@@ -3851,7 +3865,7 @@ static void __netdev_init_queue_locks_one(struct net_device *dev,
void *_unused)
{
spin_lock_init(&dev_queue->_xmit_lock);
- netdev_set_lockdep_class(&dev_queue->_xmit_lock, dev->type);
+ netdev_set_xmit_lockdep_class(&dev_queue->_xmit_lock, dev->type);
dev_queue->xmit_lock_owner = -1;
}
@@ -3896,6 +3910,7 @@ int register_netdevice(struct net_device *dev)
net = dev_net(dev);
spin_lock_init(&dev->addr_list_lock);
+ netdev_set_addr_lockdep_class(dev);
netdev_init_queue_locks(dev);
dev->iflink = -1;
diff --git a/net/netrom/af_netrom.c b/net/netrom/af_netrom.c
index fccc250..532e4fa 100644
--- a/net/netrom/af_netrom.c
+++ b/net/netrom/af_netrom.c
@@ -73,6 +73,7 @@ static const struct proto_ops nr_proto_ops;
* separate class since they always nest.
*/
static struct lock_class_key nr_netdev_xmit_lock_key;
+static struct lock_class_key nr_netdev_addr_lock_key;
static void nr_set_lockdep_one(struct net_device *dev,
struct netdev_queue *txq,
@@ -83,6 +84,7 @@ static void nr_set_lockdep_one(struct net_device *dev,
static void nr_set_lockdep_key(struct net_device *dev)
{
+ lockdep_set_class(&dev->addr_list_lock, &nr_netdev_addr_lock_key);
netdev_for_each_tx_queue(dev, nr_set_lockdep_one, NULL);
}
diff --git a/net/rose/af_rose.c b/net/rose/af_rose.c
index dbc963b..a7f1ce1 100644
--- a/net/rose/af_rose.c
+++ b/net/rose/af_rose.c
@@ -74,6 +74,7 @@ ax25_address rose_callsign;
* separate class since they always nest.
*/
static struct lock_class_key rose_netdev_xmit_lock_key;
+static struct lock_class_key rose_netdev_addr_lock_key;
static void rose_set_lockdep_one(struct net_device *dev,
struct netdev_queue *txq,
@@ -84,6 +85,7 @@ static void rose_set_lockdep_one(struct net_device *dev,
static void rose_set_lockdep_key(struct net_device *dev)
{
+ lockdep_set_class(&dev->addr_list_lock, &rose_netdev_addr_lock_key);
netdev_for_each_tx_queue(dev, rose_set_lockdep_one, NULL);
}
^ permalink raw reply related [flat|nested] 83+ messages in thread
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-22 18:39 ` Larry Finger
2008-07-22 18:44 ` Patrick McHardy
@ 2008-07-22 23:04 ` David Miller
2008-07-23 6:20 ` Jarek Poplawski
1 sibling, 1 reply; 83+ messages in thread
From: David Miller @ 2008-07-22 23:04 UTC (permalink / raw)
To: Larry.Finger; +Cc: kaber, torvalds, akpm, netdev, linux-kernel, linux-wireless
From: Larry Finger <Larry.Finger@lwfinger.net>
Date: Tue, 22 Jul 2008 13:39:08 -0500
> =============================================
> [ INFO: possible recursive locking detected ]
> 2.6.26-Linus-05752-g93ded9b-dirty #53
> ---------------------------------------------
> b43/1997 is trying to acquire lock:
> (_xmit_IEEE80211#2){-...}, at: [<ffffffffa028f322>]
> ieee80211_scan_completed+0x130/0x2e1 [mac80211]
>
> but task is already holding lock:
> (_xmit_IEEE80211#2){-...}, at: [<ffffffffa028f322>]
> ieee80211_scan_completed+0x130/0x2e1 [mac80211]
>
> other info that might help us debug this:
> 3 locks held by b43/1997:
> #0: ((name)){--..}, at: [<ffffffff80245185>] run_workqueue+0xa7/0x1f2
> #1: (&(&local->scan_work)->work){--..}, at: [<ffffffff80245185>]
> run_workqueue+0xa7/0x1f2
> #2: (_xmit_IEEE80211#2){-...}, at: [<ffffffffa028f322>]
> ieee80211_scan_completed+0x130/0x2e1 [mac80211]
>
> stack backtrace:
> Pid: 1997, comm: b43 Not tainted 2.6.26-Linus-05752-g93ded9b-dirty #53
>
> Call Trace:
> [<ffffffff80255616>] __lock_acquire+0xb7b/0xecc
> [<ffffffff8040c9a0>] __mutex_unlock_slowpath+0x100/0x10b
> [<ffffffff802559b8>] lock_acquire+0x51/0x6a
> [<ffffffffa028f322>] ieee80211_scan_completed+0x130/0x2e1 [mac80211]
> [<ffffffff8040dc08>] _spin_lock+0x1e/0x27
> [<ffffffffa028f322>] ieee80211_scan_completed+0x130/0x2e1 [mac80211]
> [<ffffffffa028f6ce>] ieee80211_sta_scan_work+0x0/0x1b8 [mac80211]
> [<ffffffff802451ce>] run_workqueue+0xf0/0x1f2
> [<ffffffff802453ab>] worker_thread+0xdb/0xea
> [<ffffffff80248a5f>] autoremove_wake_function+0x0/0x2e
> [<ffffffff802452d0>] worker_thread+0x0/0xea
> [<ffffffff80248731>] kthread+0x47/0x73
> [<ffffffff8040d7b1>] trace_hardirqs_on_thunk+0x3a/0x3f
> [<ffffffff8020ceb9>] child_rip+0xa/0x11
> [<ffffffff8020c4ef>] restore_args+0x0/0x30
> [<ffffffff802486c5>] kthreadd+0x19a/0x1bf
> [<ffffffff802486ea>] kthread+0x0/0x73
> [<ffffffff8020ceaf>] child_rip+0x0/0x11
Lockdep doesn't like that we have an array of objects (the TX queues)
and we're iterating over them grabbing all of their locks.
Does anyone know how to teach lockdep that this is OK?
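For reference, the construct in question looks roughly like this (a
paraphrase of the netdevice.h helper, not the exact tree code):

/* every txq->_xmit_lock of a given device type shares one lockdep
 * class, so taking them all in a loop looks like recursive locking */
static inline void netif_tx_lock(struct net_device *dev)
{
        unsigned int i;

        for (i = 0; i < dev->num_tx_queues; i++) {
                struct netdev_queue *txq = netdev_get_tx_queue(dev, i);

                spin_lock(&txq->_xmit_lock);
        }
}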
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-22 23:04 ` David Miller
@ 2008-07-23 6:20 ` Jarek Poplawski
2008-07-23 7:59 ` David Miller
0 siblings, 1 reply; 83+ messages in thread
From: Jarek Poplawski @ 2008-07-23 6:20 UTC (permalink / raw)
To: David Miller
Cc: Larry.Finger, kaber, torvalds, akpm, netdev, linux-kernel,
linux-wireless
On 23-07-2008 01:04, David Miller wrote:
> From: Larry Finger <Larry.Finger@lwfinger.net>
> Date: Tue, 22 Jul 2008 13:39:08 -0500
>
>> =============================================
>> [ INFO: possible recursive locking detected ]
>> 2.6.26-Linus-05752-g93ded9b-dirty #53
>> ---------------------------------------------
>> b43/1997 is trying to acquire lock:
>> (_xmit_IEEE80211#2){-...}, at: [<ffffffffa028f322>]
>> ieee80211_scan_completed+0x130/0x2e1 [mac80211]
>>
>> but task is already holding lock:
>> (_xmit_IEEE80211#2){-...}, at: [<ffffffffa028f322>]
>> ieee80211_scan_completed+0x130/0x2e1 [mac80211]
...
> Lockdep doesn't like that we have an array of objects (the TX queues)
> and we're iterating over them grabbing all of their locks.
>
> Does anyone know how to teach lockdep that this is OK?
I guess David Miller knows...:
http://permalink.gmane.org/gmane.linux.network/99784
Jarek P.
PS: if there is nothing new in lockdep the classical method would
be to change this static array:
static struct lock_class_key
netdev_xmit_lock_key[ARRAY_SIZE(netdev_lock_type)];
to
static struct lock_class_key
netdev_xmit_lock_key[ARRAY_SIZE(netdev_lock_type)][MAX_NUM_TX_QUEUES];
and set lockdep classes per queue as well. (If we are sure we don't
need lockdep subclasses anywhere this could be optimized by using
one lock_class_key per 8 queues and spin_lock_nested()).
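In code, the per-queue variant might look something like this (a
hypothetical sketch reusing netdev_lock_pos() and netdev_lock_name[]
from the patch earlier in the thread):

static inline void netdev_set_xmit_lockdep_class(struct netdev_queue *txq,
                                                 unsigned short dev_type,
                                                 unsigned int queue_num)
{
        int i = netdev_lock_pos(dev_type);

        /* one class per (device type, queue number) pair */
        lockdep_set_class_and_name(&txq->_xmit_lock,
                                   &netdev_xmit_lock_key[i][queue_num],
                                   netdev_lock_name[i]);
}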
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-23 6:20 ` Jarek Poplawski
@ 2008-07-23 7:59 ` David Miller
2008-07-23 8:54 ` Jarek Poplawski
0 siblings, 1 reply; 83+ messages in thread
From: David Miller @ 2008-07-23 7:59 UTC (permalink / raw)
To: jarkao2
Cc: Larry.Finger, kaber, torvalds, akpm, netdev, linux-kernel,
linux-wireless
From: Jarek Poplawski <jarkao2@gmail.com>
Date: Wed, 23 Jul 2008 06:20:36 +0000
> PS: if there is nothing new in lockdep the classical method would
> be to change this static array:
>
> static struct lock_class_key
> netdev_xmit_lock_key[ARRAY_SIZE(netdev_lock_type)];
>
> to
>
> static struct lock_class_key
> netdev_xmit_lock_key[ARRAY_SIZE(netdev_lock_type)][MAX_NUM_TX_QUEUES];
>
> and set lockdep classes per queue as well. (If we are sure we don't
> need lockdep subclasses anywhere this could be optimized by using
> one lock_class_key per 8 queues and spin_lock_nested()).
Unfortunately MAX_NUM_TX_QUEUES is USHORT_MAX, so this isn't really
a feasible approach.
spin_lock_nested() isn't all that viable either, as the subclass
limit is something like 8.
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-23 7:59 ` David Miller
@ 2008-07-23 8:54 ` Jarek Poplawski
2008-07-23 9:03 ` Peter Zijlstra
0 siblings, 1 reply; 83+ messages in thread
From: Jarek Poplawski @ 2008-07-23 8:54 UTC (permalink / raw)
To: David Miller
Cc: Larry.Finger, kaber, torvalds, akpm, netdev, linux-kernel,
linux-wireless, peterz, mingo
On Wed, Jul 23, 2008 at 12:59:21AM -0700, David Miller wrote:
> From: Jarek Poplawski <jarkao2@gmail.com>
> Date: Wed, 23 Jul 2008 06:20:36 +0000
>
> > PS: if there is nothing new in lockdep the classical method would
> > be to change this static array:
> >
> > static struct lock_class_key
> > netdev_xmit_lock_key[ARRAY_SIZE(netdev_lock_type)];
> >
> > to
> >
> > static struct lock_class_key
> > netdev_xmit_lock_key[ARRAY_SIZE(netdev_lock_type)][MAX_NUM_TX_QUEUES];
> >
> > and set lockdep classes per queue as well. (If we are sure we don't
> > need lockdep subclasses anywhere this could be optimized by using
> > one lock_class_key per 8 queues and spin_lock_nested()).
>
> Unfortunately MAX_NUM_TX_QUEUES is USHORT_MAX, so this isn't really
> a feasible approach.
Is it used by real devices already? Maybe we could start with
something smaller for now?
> spin_lock_nested() isn't all that viable either, as the subclass
> limit is something like 8.
This method would need some additional counting: depending on the
queue number, each group of 8 consecutive queues shares (is set to)
the same class, and the queue number mod 8 gives the subclass number
for spin_lock_nested().
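A sketch of that mod-8 scheme (hypothetical, building on the same
two-dimensional key table):

/* 8 consecutive queues share one class; the remainder within
 * the group selects the lockdep subclass */
lockdep_set_class(&txq->_xmit_lock,
                  &netdev_xmit_lock_key[pos][queue_num / 8]);
...
spin_lock_nested(&txq->_xmit_lock, queue_num % 8);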
I'll try to find out whether there is something new around this in lockdep.
(lockdep people added to CC.)
Jarek P.
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-23 8:54 ` Jarek Poplawski
@ 2008-07-23 9:03 ` Peter Zijlstra
2008-07-23 9:35 ` Jarek Poplawski
0 siblings, 1 reply; 83+ messages in thread
From: Peter Zijlstra @ 2008-07-23 9:03 UTC (permalink / raw)
To: Jarek Poplawski
Cc: David Miller, Larry.Finger, kaber, torvalds, akpm, netdev,
linux-kernel, linux-wireless, mingo
On Wed, 2008-07-23 at 08:54 +0000, Jarek Poplawski wrote:
> On Wed, Jul 23, 2008 at 12:59:21AM -0700, David Miller wrote:
> > From: Jarek Poplawski <jarkao2@gmail.com>
> > Date: Wed, 23 Jul 2008 06:20:36 +0000
> >
> > > PS: if there is nothing new in lockdep the classical method would
> > > be to change this static array:
> > >
> > > static struct lock_class_key
> > > netdev_xmit_lock_key[ARRAY_SIZE(netdev_lock_type)];
> > >
> > > to
> > >
> > > static struct lock_class_key
> > > netdev_xmit_lock_key[ARRAY_SIZE(netdev_lock_type)][MAX_NUM_TX_QUEUES];
> > >
> > > and set lockdep classes per queue as well. (If we are sure we don't
> > > need lockdep subclasses anywhere this could be optimized by using
> > > one lock_class_key per 8 queues and spin_lock_nested()).
> >
> > Unfortunately MAX_NUM_TX_QUEUES is USHORT_MAX, so this isn't really
> > a feasible approach.
>
> Is it used by real devices already? Maybe for the beginning we could
> start with something less?
>
> > spin_lock_nested() isn't all that viable either, as the subclass
> > limit is something like 8.
>
> This method would need to do some additional counting: depending of
> a queue number each 8 subsequent queues share (are set to) the same
> class and their number mod 8 gives the subqueue number for
> spin_lock_nested().
>
> I'll try to find if there is something new around this in lockdep.
> (lockdep people added to CC.)
There isn't.
Is there a static data structure that the driver needs to instantiate to
'create' a queue? Something like:
/* this imaginary e1000 hardware has 16 hardware queues */
static struct net_tx_queue e1000e_tx_queues[16];
In that case you can stick the key in there and do:
int e1000e_init_tx_queue(struct net_tx_queue *txq)
{
        ...
        spin_lock_init(&txq->tx_lock);
        lockdep_set_class(&txq->tx_lock, &txq->tx_lock_key);
        ...
}
(This is what the scheduler runqueues also do.)
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-23 9:03 ` Peter Zijlstra
@ 2008-07-23 9:35 ` Jarek Poplawski
2008-07-23 9:50 ` Peter Zijlstra
0 siblings, 1 reply; 83+ messages in thread
From: Jarek Poplawski @ 2008-07-23 9:35 UTC (permalink / raw)
To: Peter Zijlstra
Cc: David Miller, Larry.Finger, kaber, torvalds, akpm, netdev,
linux-kernel, linux-wireless, mingo
On Wed, Jul 23, 2008 at 11:03:06AM +0200, Peter Zijlstra wrote:
> On Wed, 2008-07-23 at 08:54 +0000, Jarek Poplawski wrote:
> > On Wed, Jul 23, 2008 at 12:59:21AM -0700, David Miller wrote:
> > > From: Jarek Poplawski <jarkao2@gmail.com>
> > > Date: Wed, 23 Jul 2008 06:20:36 +0000
> > >
> > > > PS: if there is nothing new in lockdep the classical method would
> > > > be to change this static array:
> > > >
> > > > static struct lock_class_key
> > > > netdev_xmit_lock_key[ARRAY_SIZE(netdev_lock_type)];
> > > >
> > > > to
> > > >
> > > > static struct lock_class_key
> > > > netdev_xmit_lock_key[ARRAY_SIZE(netdev_lock_type)][MAX_NUM_TX_QUEUES];
> > > >
> > > > and set lockdep classes per queue as well. (If we are sure we don't
> > > > need lockdep subclasses anywhere this could be optimized by using
> > > > one lock_class_key per 8 queues and spin_lock_nested()).
> > >
> > > Unfortunately MAX_NUM_TX_QUEUES is USHORT_MAX, so this isn't really
> > > a feasible approach.
> >
> > Is it used by real devices already? Maybe for the beginning we could
> > start with something less?
> >
> > > spin_lock_nested() isn't all that viable either, as the subclass
> > > limit is something like 8.
> >
> > This method would need to do some additional counting: depending of
> > a queue number each 8 subsequent queues share (are set to) the same
> > class and their number mod 8 gives the subqueue number for
> > spin_lock_nested().
> >
> > I'll try to find if there is something new around this in lockdep.
> > (lockdep people added to CC.)
>
> There isn't.
>
> Is there a static data structure that the driver needs to instantiate to
> 'create' a queue? Something like:
>
> /* this imaginary e1000 hardware has 16 hardware queues */
> static struct net_tx_queue e1000e_tx_queues[16];
I guess not.
Then, IMHO, we could be practical and simply skip lockdep validation
for "some" range of locks: e.g. set up the table for the first 256
queues only, and use e.g. __raw_spin_lock() for higher queue numbers.
(If there are any bad locking patterns, this should be enough to catch
them.)
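A minimal sketch of that idea, with hypothetical names (and note the
NAK on raw spinlock ops just below):

/* hypothetical: classify only the first 256 queues */
if (queue_num < 256)
        lockdep_set_class(&txq->_xmit_lock,
                          &netdev_xmit_lock_key[pos][queue_num]);
/* queues >= 256 would take __raw_spin_lock() on the underlying
 * raw lock, bypassing lockdep entirely */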
Jarek P.
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-23 9:35 ` Jarek Poplawski
@ 2008-07-23 9:50 ` Peter Zijlstra
2008-07-23 10:13 ` Jarek Poplawski
0 siblings, 1 reply; 83+ messages in thread
From: Peter Zijlstra @ 2008-07-23 9:50 UTC (permalink / raw)
To: Jarek Poplawski
Cc: David Miller, Larry.Finger, kaber, torvalds, akpm, netdev,
linux-kernel, linux-wireless, mingo
On Wed, 2008-07-23 at 09:35 +0000, Jarek Poplawski wrote:
> On Wed, Jul 23, 2008 at 11:03:06AM +0200, Peter Zijlstra wrote:
> > On Wed, 2008-07-23 at 08:54 +0000, Jarek Poplawski wrote:
> > > On Wed, Jul 23, 2008 at 12:59:21AM -0700, David Miller wrote:
> > > > From: Jarek Poplawski <jarkao2@gmail.com>
> > > > Date: Wed, 23 Jul 2008 06:20:36 +0000
> > > >
> > > > > PS: if there is nothing new in lockdep the classical method would
> > > > > be to change this static array:
> > > > >
> > > > > static struct lock_class_key
> > > > > netdev_xmit_lock_key[ARRAY_SIZE(netdev_lock_type)];
> > > > >
> > > > > to
> > > > >
> > > > > static struct lock_class_key
> > > > > netdev_xmit_lock_key[ARRAY_SIZE(netdev_lock_type)][MAX_NUM_TX_QUEUES];
> > > > >
> > > > > and set lockdep classes per queue as well. (If we are sure we don't
> > > > > need lockdep subclasses anywhere this could be optimized by using
> > > > > one lock_class_key per 8 queues and spin_lock_nested()).
> > > >
> > > > Unfortunately MAX_NUM_TX_QUEUES is USHORT_MAX, so this isn't really
> > > > a feasible approach.
> > >
> > > Is it used by real devices already? Maybe for the beginning we could
> > > start with something less?
> > >
> > > > spin_lock_nested() isn't all that viable either, as the subclass
> > > > limit is something like 8.
> > >
> > > This method would need to do some additional counting: depending of
> > > a queue number each 8 subsequent queues share (are set to) the same
> > > class and their number mod 8 gives the subqueue number for
> > > spin_lock_nested().
> > >
> > > I'll try to find if there is something new around this in lockdep.
> > > (lockdep people added to CC.)
> >
> > There isn't.
> >
> > Is there a static data structure that the driver needs to instantiate to
> > 'create' a queue? Something like:
> >
> > /* this imaginary e1000 hardware has 16 hardware queues */
> > static struct net_tx_queue e1000e_tx_queues[16];
>
> I guess, not.
>
> Then, IMHO, we could be practical and simply skip lockdep validation
> for "some" range of locks: e.g. to set the table for the first 256
> queues only, and to do e.g. __raw_spin_lock() for bigger numbers. (If
> there are any bad locking patterns this should be enough for checking.)
definite NAK on using raw spinlock ops...
I'll go look at this multiqueue stuff to see if there is anything to be
done.. David, what would be a good commit to start reading?
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-23 9:50 ` Peter Zijlstra
@ 2008-07-23 10:13 ` Jarek Poplawski
2008-07-23 10:58 ` Peter Zijlstra
0 siblings, 1 reply; 83+ messages in thread
From: Jarek Poplawski @ 2008-07-23 10:13 UTC (permalink / raw)
To: Peter Zijlstra
Cc: David Miller, Larry.Finger, kaber, torvalds, akpm, netdev,
linux-kernel, linux-wireless, mingo
On Wed, Jul 23, 2008 at 11:50:14AM +0200, Peter Zijlstra wrote:
...
> definite NAK on using raw spinlock ops...
>
> I'll go look at this multiqueue stuff to see if there is anything to be
> done.. David, what would be a good commit to start reading?
...In case David is ever asleep: I think the current Linus git is
good enough (the problem is in netdevice.h: netif_tx_lock()).
Jarek P.
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-23 10:13 ` Jarek Poplawski
@ 2008-07-23 10:58 ` Peter Zijlstra
2008-07-23 11:35 ` Jarek Poplawski
2008-07-23 20:14 ` David Miller
0 siblings, 2 replies; 83+ messages in thread
From: Peter Zijlstra @ 2008-07-23 10:58 UTC (permalink / raw)
To: Jarek Poplawski
Cc: David Miller, Larry.Finger, kaber, torvalds, akpm, netdev,
linux-kernel, linux-wireless, mingo
On Wed, 2008-07-23 at 10:13 +0000, Jarek Poplawski wrote:
> On Wed, Jul 23, 2008 at 11:50:14AM +0200, Peter Zijlstra wrote:
> ....
> > definite NAK on using raw spinlock ops...
> >
> > I'll go look at this multiqueue stuff to see if there is anything to be
> > done.. David, what would be a good commit to start reading?
>
> ....In the case David ever sleeps: I think, the current Linus' git is
> good enough (the problem is with netdevice.h: netif_tx_lock()).
Ah, right...
that takes a whole bunch of locks at once...
Is that really needed? When I grep for its usage, it's surprisingly
few drivers using it, and even fewer places in generic code.
When I look at the mac80211 code in ieee80211_tx_pending() it looks
like it could do with just one lock at a time, instead of all of them -
but I might be missing some obvious details.
So I guess my question is, is netif_tx_lock() here to stay, or is the
right fix to convert all those drivers to use __netif_tx_lock() which
locks only a single queue?
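For context, a rough sketch of the two variants (assuming the current
netdevice.h helpers):

/* fast path: lock only the one queue this skb maps to */
__netif_tx_lock(txq, smp_processor_id());

/* slow path: exclude all TX paths by locking every queue */
netif_tx_lock(dev);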
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-23 10:58 ` Peter Zijlstra
@ 2008-07-23 11:35 ` Jarek Poplawski
2008-07-23 11:49 ` Jarek Poplawski
2008-07-23 20:14 ` David Miller
1 sibling, 1 reply; 83+ messages in thread
From: Jarek Poplawski @ 2008-07-23 11:35 UTC (permalink / raw)
To: Peter Zijlstra
Cc: David Miller, Larry.Finger, kaber, torvalds, akpm, netdev,
linux-kernel, linux-wireless, mingo
On Wed, Jul 23, 2008 at 12:58:16PM +0200, Peter Zijlstra wrote:
...
> Ah, right,...
>
> that takes a whole bunch of locks at once..
>
> Is that really needed? - when I grep for its usage its surprisingly few
> drivers using it and even less generic code.
>
> When I look at the mac802.11 code in ieee80211_tx_pending() it looks
> like it can do with just one lock at a time, instead of all - but I
> might be missing some obvious details.
>
> So I guess my question is, is netif_tx_lock() here to stay, or is the
> right fix to convert all those drivers to use __netif_tx_lock() which
> locks only a single queue?
>
It's a new thing mainly for new hardware/drivers, and we are just after
the conversion (older drivers effectively use __netif_tx_lock()), so
it'll probably stay for some time until something better is found.
David will tell the rest, I hope.
Jarek P.
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-23 11:35 ` Jarek Poplawski
@ 2008-07-23 11:49 ` Jarek Poplawski
2008-07-23 20:16 ` David Miller
0 siblings, 1 reply; 83+ messages in thread
From: Jarek Poplawski @ 2008-07-23 11:49 UTC (permalink / raw)
To: Peter Zijlstra
Cc: David Miller, Larry.Finger, kaber, torvalds, akpm, netdev,
linux-kernel, linux-wireless, mingo
On Wed, Jul 23, 2008 at 11:35:19AM +0000, Jarek Poplawski wrote:
> On Wed, Jul 23, 2008 at 12:58:16PM +0200, Peter Zijlstra wrote:
...
> > When I look at the mac802.11 code in ieee80211_tx_pending() it looks
> > like it can do with just one lock at a time, instead of all - but I
> > might be missing some obvious details.
> >
> > So I guess my question is, is netif_tx_lock() here to stay, or is the
> > right fix to convert all those drivers to use __netif_tx_lock() which
> > locks only a single queue?
> >
>
> It's a new thing mainly for new hardware/drivers, and just after
> conversion (older drivers effectively use __netif_tx_lock()), so it'll
> probably stay for some time until something better is found. David,
> will tell the rest, I hope.
...And, of course, these new drivers should also lock a single queue
where possible.
Jarek P.
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-23 10:58 ` Peter Zijlstra
2008-07-23 11:35 ` Jarek Poplawski
@ 2008-07-23 20:14 ` David Miller
2008-07-24 7:00 ` Peter Zijlstra
2008-07-25 17:04 ` Ingo Oeser
1 sibling, 2 replies; 83+ messages in thread
From: David Miller @ 2008-07-23 20:14 UTC (permalink / raw)
To: peterz
Cc: jarkao2, Larry.Finger, kaber, torvalds, akpm, netdev,
linux-kernel, linux-wireless, mingo
From: Peter Zijlstra <peterz@infradead.org>
Date: Wed, 23 Jul 2008 12:58:16 +0200
> So I guess my question is, is netif_tx_lock() here to stay, or is the
> right fix to convert all those drivers to use __netif_tx_lock() which
> locks only a single queue?
It's staying.
It's trying to block all potential calls into the ->hard_start_xmit()
method of the driver, and the only reliable way to do that is to take
all the TX queue locks. And in one form or another, we're going to
have this "grab/release all the TX queue locks" construct.
I find it interesting that this cannot be simply described to lockdep
:-)
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-23 11:49 ` Jarek Poplawski
@ 2008-07-23 20:16 ` David Miller
2008-07-23 20:43 ` Jarek Poplawski
2008-07-24 9:10 ` Peter Zijlstra
0 siblings, 2 replies; 83+ messages in thread
From: David Miller @ 2008-07-23 20:16 UTC (permalink / raw)
To: jarkao2
Cc: peterz, Larry.Finger, kaber, torvalds, akpm, netdev, linux-kernel,
linux-wireless, mingo
From: Jarek Poplawski <jarkao2@gmail.com>
Date: Wed, 23 Jul 2008 11:49:14 +0000
> On Wed, Jul 23, 2008 at 11:35:19AM +0000, Jarek Poplawski wrote:
> > On Wed, Jul 23, 2008 at 12:58:16PM +0200, Peter Zijlstra wrote:
> ...
> > > When I look at the mac802.11 code in ieee80211_tx_pending() it looks
> > > like it can do with just one lock at a time, instead of all - but I
> > > might be missing some obvious details.
> > >
> > > So I guess my question is, is netif_tx_lock() here to stay, or is the
> > > right fix to convert all those drivers to use __netif_tx_lock() which
> > > locks only a single queue?
> > >
> >
> > It's a new thing mainly for new hardware/drivers, and just after
> > conversion (older drivers effectively use __netif_tx_lock()), so it'll
> > probably stay for some time until something better is found. David,
> > will tell the rest, I hope.
>
> ...And, of course, these new drivers should also lock a single queue
> where possible.
It isn't going away.
There will always be a need for a "stop all the TX queues" operation.
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-23 20:16 ` David Miller
@ 2008-07-23 20:43 ` Jarek Poplawski
2008-07-23 20:55 ` David Miller
2008-07-24 9:10 ` Peter Zijlstra
1 sibling, 1 reply; 83+ messages in thread
From: Jarek Poplawski @ 2008-07-23 20:43 UTC (permalink / raw)
To: David Miller
Cc: peterz, Larry.Finger, kaber, torvalds, akpm, netdev, linux-kernel,
linux-wireless, mingo
On Wed, Jul 23, 2008 at 01:16:07PM -0700, David Miller wrote:
...
> There will always be a need for a "stop all the TX queues" operation.
The question is whether the current way is "all correct". As a matter
of fact, I think Peter's doubts could be justified: taking "USHORT_MAX"
locks looks really dubious (so maybe it's not so strange that lockdep
was never taught to handle this).
Jarek P.
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-23 20:43 ` Jarek Poplawski
@ 2008-07-23 20:55 ` David Miller
0 siblings, 0 replies; 83+ messages in thread
From: David Miller @ 2008-07-23 20:55 UTC (permalink / raw)
To: jarkao2
Cc: peterz, Larry.Finger, kaber, torvalds, akpm, netdev, linux-kernel,
linux-wireless, mingo
From: Jarek Poplawski <jarkao2@gmail.com>
Date: Wed, 23 Jul 2008 22:43:35 +0200
> On Wed, Jul 23, 2008 at 01:16:07PM -0700, David Miller wrote:
> ...
> > There will always be a need for a "stop all the TX queues" operation.
>
> The question is if the current way is "all correct". As a matter of
> fact I think Peter's doubts could be justified: taking "USHORT_MAX"
> locks looks really dubious (so maybe it's not so strange lockdep
> didn't get used to this).
There are, of course, potentially other ways to achieve the objective.
And for non-multiqueue aware devices (which is the vast majority of
the 400 or so networking drivers we have) there is only one queue and
thus one lock taken.
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-23 20:14 ` David Miller
@ 2008-07-24 7:00 ` Peter Zijlstra
2008-07-25 17:04 ` Ingo Oeser
1 sibling, 0 replies; 83+ messages in thread
From: Peter Zijlstra @ 2008-07-24 7:00 UTC (permalink / raw)
To: David Miller
Cc: jarkao2, Larry.Finger, kaber, torvalds, akpm, netdev,
linux-kernel, linux-wireless, mingo
On Wed, 2008-07-23 at 13:14 -0700, David Miller wrote:
> From: Peter Zijlstra <peterz@infradead.org>
> Date: Wed, 23 Jul 2008 12:58:16 +0200
>
> > So I guess my question is, is netif_tx_lock() here to stay, or is the
> > right fix to convert all those drivers to use __netif_tx_lock() which
> > locks only a single queue?
>
> It's staying.
>
> It's trying to block all potential calls into the ->hard_start_xmit()
> method of the driver, and the only reliable way to do that is to take
> all the TX queue locks. And in one form or another, we're going to
> have this "grab/release all the TX queue locks" construct.
>
> I find it interesting that this cannot be simply described to lockdep
> :-)
If you think it's OK to take USHORT_MAX locks at once, I'm afraid we'll
have to agree to disagree :-/
Thing is, lockdep wants to be able to describe the locking hierarchy
with classes, and each class needs to be in static storage for various
reasons.
So if you make a locking hierarchy that is USHORT_MAX deep, you need at
least that many static classes.
Also, you'll run into the fact that lockdep will only track something
like 48 held locks; after that it shuts itself down.
I'm aware of only 2 sites in the kernel that break this limit. The
down-side of stretching this limit is that deep lock chains come with
costs (esp. so on -rt), so I'm not particularly eager to grow it - it
might give the impression it's a good idea to have very long lock
chains.
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-23 20:16 ` David Miller
2008-07-23 20:43 ` Jarek Poplawski
@ 2008-07-24 9:10 ` Peter Zijlstra
2008-07-24 9:20 ` David Miller
1 sibling, 1 reply; 83+ messages in thread
From: Peter Zijlstra @ 2008-07-24 9:10 UTC (permalink / raw)
To: David Miller
Cc: jarkao2, Larry.Finger, kaber, torvalds, akpm, netdev,
linux-kernel, linux-wireless, mingo, Nick Piggin, Paul E McKenney
On Wed, 2008-07-23 at 13:16 -0700, David Miller wrote:
> From: Jarek Poplawski <jarkao2@gmail.com>
> Date: Wed, 23 Jul 2008 11:49:14 +0000
>
> > On Wed, Jul 23, 2008 at 11:35:19AM +0000, Jarek Poplawski wrote:
> > > On Wed, Jul 23, 2008 at 12:58:16PM +0200, Peter Zijlstra wrote:
> > ...
> > > > When I look at the mac802.11 code in ieee80211_tx_pending() it looks
> > > > like it can do with just one lock at a time, instead of all - but I
> > > > might be missing some obvious details.
> > > >
> > > > So I guess my question is, is netif_tx_lock() here to stay, or is the
> > > > right fix to convert all those drivers to use __netif_tx_lock() which
> > > > locks only a single queue?
> > > >
> > >
> > > It's a new thing mainly for new hardware/drivers, and just after
> > > conversion (older drivers effectively use __netif_tx_lock()), so it'll
> > > probably stay for some time until something better is found. David,
> > > will tell the rest, I hope.
> >
> > ...And, of course, these new drivers should also lock a single queue
> > where possible.
>
> It isn't going away.
>
> There will always be a need for a "stop all the TX queues" operation.
Ok, then how about something like this: the idea is to wrap the
per-tx-queue lock with a read lock on the device, and let
netif_tx_lock() be the write side, thereby excluding all device locks,
but without incurring cacheline bouncing on the read side, by using
per-cpu counters the way RCU does.
This of course requires that netif_tx_lock() is rare, otherwise stuff
will go bounce anyway...
Probably missed a few details, but I think the below ought to show the
idea...
struct tx_lock {
        int busy;
        spinlock_t lock;
        unsigned long *counters;
};

int tx_lock_init(struct tx_lock *txl)
{
        txl->busy = 0;
        spin_lock_init(&txl->lock);
        txl->counters = alloc_percpu(unsigned long);
        if (!txl->counters)
                return -ENOMEM;
        return 0;
}

void __netif_tx_lock(struct netdev_queue *txq, int cpu)
{
        struct net_device *dev = txq->dev;

        if (rcu_dereference(dev->tx_lock.busy)) {
                /* writer active: serialize the counter bump against it */
                spin_lock(&dev->tx_lock.lock);
                (*percpu_ptr(dev->tx_lock.counters, cpu))++;
                spin_unlock(&dev->tx_lock.lock);
        } else
                (*percpu_ptr(dev->tx_lock.counters, cpu))++;

        spin_lock(&txq->_xmit_lock);
        txq->xmit_lock_owner = cpu;
}

void __netif_tx_unlock(struct netdev_queue *txq)
{
        struct net_device *dev = txq->dev;

        (*percpu_ptr(dev->tx_lock.counters, txq->xmit_lock_owner))--;
        txq->xmit_lock_owner = -1;
        spin_unlock(&txq->_xmit_lock);
}

unsigned long tx_lock_read_counters(struct tx_lock *txl)
{
        int i;
        unsigned long counter = 0;

        /* can use online - the inc/dec are matched per cpu */
        for_each_online_cpu(i)
                counter += *percpu_ptr(txl->counters, i);

        return counter;
}

void netif_tx_lock(struct net_device *dev)
{
        spin_lock(&dev->tx_lock.lock);
        rcu_assign_pointer(dev->tx_lock.busy, 1);
        /* wait for all in-flight readers to drain */
        while (tx_lock_read_counters(&dev->tx_lock))
                cpu_relax();
}

void netif_tx_unlock(struct net_device *dev)
{
        rcu_assign_pointer(dev->tx_lock.busy, 0);
        smp_wmb(); /* because rcu_assign_pointer is broken */
        spin_unlock(&dev->tx_lock.lock);
}
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-24 9:10 ` Peter Zijlstra
@ 2008-07-24 9:20 ` David Miller
2008-07-24 9:27 ` Peter Zijlstra
0 siblings, 1 reply; 83+ messages in thread
From: David Miller @ 2008-07-24 9:20 UTC (permalink / raw)
To: peterz
Cc: jarkao2, Larry.Finger, kaber, torvalds, akpm, netdev,
linux-kernel, linux-wireless, mingo, nickpiggin, paulmck
From: Peter Zijlstra <peterz@infradead.org>
Date: Thu, 24 Jul 2008 11:10:48 +0200
> Ok, then how about something like this, the idea is to wrap the per tx
> lock with a read lock of the device and let the netif_tx_lock() be the
> write side, therefore excluding all device locks, but not incure the
> cacheline bouncing on the read side by using per-cpu counters like rcu
> does.
>
> This of course requires that netif_tx_lock() is rare, otherwise stuff
> will go bounce anyway...
>
> Probably missed a few details,.. but I think the below ought to show the
> idea...
Thanks for the effort, but I don't think we can seriously consider
this.
This lock is taken for every packet transmitted by the system; adding
another memory reference (the RCU deref) and a counter bump is just
not something we can accept merely to placate lockdep. We went through
all of this effort to separate the TX locking into individual
queues; it would be silly to go back and make it more expensive.
I have other ideas which I've expanded upon in other emails. They
involve creating a netif_tx_freeze() interface and getting the drivers
to start using it.
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-24 9:20 ` David Miller
@ 2008-07-24 9:27 ` Peter Zijlstra
2008-07-24 9:32 ` David Miller
0 siblings, 1 reply; 83+ messages in thread
From: Peter Zijlstra @ 2008-07-24 9:27 UTC (permalink / raw)
To: David Miller
Cc: jarkao2, Larry.Finger, kaber, torvalds, akpm, netdev,
linux-kernel, linux-wireless, mingo, nickpiggin, paulmck
On Thu, 2008-07-24 at 02:20 -0700, David Miller wrote:
> From: Peter Zijlstra <peterz@infradead.org>
> Date: Thu, 24 Jul 2008 11:10:48 +0200
>
> > Ok, then how about something like this, the idea is to wrap the per tx
> > lock with a read lock of the device and let the netif_tx_lock() be the
> > write side, therefore excluding all device locks, but not incure the
> > cacheline bouncing on the read side by using per-cpu counters like rcu
> > does.
> >
> > This of course requires that netif_tx_lock() is rare, otherwise stuff
> > will go bounce anyway...
> >
> > Probably missed a few details,.. but I think the below ought to show the
> > idea...
>
> Thanks for the effort, but I don't think we can seriously consider
> this.
>
> This lock is taken for every packet transmitted by the system, adding
> another memory reference (the RCU deref) and a counter bump is just
> not something we can just add to placate lockdep. We going through
> all of this effort to seperate the TX locking into individual
> queues, it would be silly to go back and make it more expensive.
Well, it's not only lockdep; taking a very large number of locks is
expensive as well.
> I have other ideas which I've expanded upon in other emails. They
> involve creating a netif_tx_freeze() interface and getting the drivers
> to start using it.
OK, as long as we get there :-)
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-24 9:27 ` Peter Zijlstra
@ 2008-07-24 9:32 ` David Miller
2008-07-24 10:08 ` Peter Zijlstra
0 siblings, 1 reply; 83+ messages in thread
From: David Miller @ 2008-07-24 9:32 UTC (permalink / raw)
To: peterz
Cc: jarkao2, Larry.Finger, kaber, torvalds, akpm, netdev,
linux-kernel, linux-wireless, mingo, nickpiggin, paulmck
From: Peter Zijlstra <peterz@infradead.org>
Date: Thu, 24 Jul 2008 11:27:05 +0200
> Well, not only lockdep, taking a very large number of locks is expensive
> as well.
Right now it would be on the order of 16 or 32 for
real hardware.
Much less than the scheduler currently takes on some of my systems,
so you are the pot calling the kettle black. :-)
USHORT_MAX is just the hard upper limit imposed by the software
interface, merely a side effect of storing the queue number of the
SKB as a u16.
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-24 9:32 ` David Miller
@ 2008-07-24 10:08 ` Peter Zijlstra
2008-07-24 10:38 ` Nick Piggin
0 siblings, 1 reply; 83+ messages in thread
From: Peter Zijlstra @ 2008-07-24 10:08 UTC (permalink / raw)
To: David Miller
Cc: jarkao2, Larry.Finger, kaber, torvalds, akpm, netdev,
linux-kernel, linux-wireless, mingo, nickpiggin, paulmck
On Thu, 2008-07-24 at 02:32 -0700, David Miller wrote:
> From: Peter Zijlstra <peterz@infradead.org>
> Date: Thu, 24 Jul 2008 11:27:05 +0200
>
> > Well, not only lockdep, taking a very large number of locks is expensive
> > as well.
>
> Right now it would be on the order of 16 or 32 for
> real hardware.
>
> Much less than the scheduler currently takes on some
> of my systems, so currently you are the pot calling the
> kettle black. :-)
One nit, and then I'll let this issue rest :-)
The scheduler has a long lock dependency chain (nr_cpu_ids rq locks),
but it never takes all of them at the same time. Any one code path will
hold at most two rq locks.
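For reference, the scheduler's two-lock pattern looks roughly like this
(paraphrasing kernel/sched.c of this era):

static void double_rq_lock(struct rq *rq1, struct rq *rq2)
{
        if (rq1 == rq2) {
                spin_lock(&rq1->lock);
        } else if (rq1 < rq2) {
                /* lock in address order to avoid ABBA deadlock */
                spin_lock(&rq1->lock);
                spin_lock_nested(&rq2->lock, SINGLE_DEPTH_NESTING);
        } else {
                spin_lock(&rq2->lock);
                spin_lock_nested(&rq1->lock, SINGLE_DEPTH_NESTING);
        }
}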
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-24 10:08 ` Peter Zijlstra
@ 2008-07-24 10:38 ` Nick Piggin
2008-07-24 10:55 ` Miklos Szeredi
` (2 more replies)
0 siblings, 3 replies; 83+ messages in thread
From: Nick Piggin @ 2008-07-24 10:38 UTC (permalink / raw)
To: Peter Zijlstra
Cc: David Miller, jarkao2, Larry.Finger, kaber, torvalds, akpm,
netdev, linux-kernel, linux-wireless, mingo, paulmck
On Thursday 24 July 2008 20:08, Peter Zijlstra wrote:
> On Thu, 2008-07-24 at 02:32 -0700, David Miller wrote:
> > From: Peter Zijlstra <peterz@infradead.org>
> > Date: Thu, 24 Jul 2008 11:27:05 +0200
> >
> > > Well, not only lockdep, taking a very large number of locks is
> > > expensive as well.
> >
> > Right now it would be on the order of 16 or 32 for
> > real hardware.
> >
> > Much less than the scheduler currently takes on some
> > of my systems, so currently you are the pot calling the
> > kettle black. :-)
>
> One nit, and then I'll let this issue rest :-)
>
> The scheduler has a long lock dependancy chain (nr_cpu_ids rq locks),
> but it never takes all of them at the same time. Any one code path will
> at most hold two rq locks.
Aside from lockdep, is there a particular problem with taking 64k locks
at once (in a very slow path, of course)? I don't think it causes a
problem with preempt_count; does it cause issues with the -rt kernel?
Hey, something kind of cool (and OT) I've just thought of that we can
do with ticket locks is to take tickets for 2 (or 64K) nested locks,
and then wait for them both (all), so the cost is N*lock + longest spin,
rather than N*lock + N*avg spin.
That would mean even at the worst case of a huge amount of contention
on all 64K locks, it should only take a couple of ms to take all of
them (assuming max spin time isn't ridiculous).
Probably not the kind of feature we want to expose widely, but for
really special things like the scheduler, it might be a neat hack to
save a few cycles ;) Traditional implementations would just have
#define spin_lock_async spin_lock
#define spin_lock_async_wait do {} while (0)
Sorry it's offtopic, but if I didn't post it, I'd forget to. Might be
a fun quick hack for someone.
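Sketched as pseudo-code with made-up names (neither helper exists; and,
as Miklos points out below, the ticket grabs themselves still need to
be ordered somehow):

/* hypothetical: grab tickets for all locks up front,
 * then wait for each ticket to be served */
for (i = 0; i < nr_locks; i++)
        spin_lock_async(&lock[i], &ticket[i]);  /* take ticket, no spin */
for (i = 0; i < nr_locks; i++)
        spin_lock_async_wait(&lock[i], &ticket[i]); /* spin until served */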
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-24 10:38 ` Nick Piggin
@ 2008-07-24 10:55 ` Miklos Szeredi
2008-07-24 11:06 ` Nick Piggin
2008-07-24 10:59 ` Peter Zijlstra
2008-08-01 21:10 ` Paul E. McKenney
2 siblings, 1 reply; 83+ messages in thread
From: Miklos Szeredi @ 2008-07-24 10:55 UTC (permalink / raw)
To: nickpiggin
Cc: peterz, davem, jarkao2, Larry.Finger, kaber, torvalds, akpm,
netdev, linux-kernel, linux-wireless, mingo, paulmck
On Thu, 24 Jul 2008, Nick Piggin wrote:
> Hey, something kind of cool (and OT) I've just thought of that we can
> do with ticket locks is to take tickets for 2 (or 64K) nested locks,
> and then wait for them both (all), so the cost is N*lock + longest spin,
> rather than N*lock + N*avg spin.
Isn't this deadlocky?
E.g. one task takes ticket x=1, then another task comes in and takes
x=2 and y=1, then the first task takes y=2. Then neither can actually
acquire both locks.
Miklos
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-24 10:38 ` Nick Piggin
2008-07-24 10:55 ` Miklos Szeredi
@ 2008-07-24 10:59 ` Peter Zijlstra
2008-08-01 21:10 ` Paul E. McKenney
2 siblings, 0 replies; 83+ messages in thread
From: Peter Zijlstra @ 2008-07-24 10:59 UTC (permalink / raw)
To: Nick Piggin
Cc: David Miller, jarkao2, Larry.Finger, kaber, torvalds, akpm,
netdev, linux-kernel, linux-wireless, mingo, paulmck,
Thomas Gleixner
On Thu, 2008-07-24 at 20:38 +1000, Nick Piggin wrote:
> On Thursday 24 July 2008 20:08, Peter Zijlstra wrote:
> > On Thu, 2008-07-24 at 02:32 -0700, David Miller wrote:
> > > From: Peter Zijlstra <peterz@infradead.org>
> > > Date: Thu, 24 Jul 2008 11:27:05 +0200
> > >
> > > > Well, not only lockdep, taking a very large number of locks is
> > > > expensive as well.
> > >
> > > Right now it would be on the order of 16 or 32 for
> > > real hardware.
> > >
> > > Much less than the scheduler currently takes on some
> > > of my systems, so currently you are the pot calling the
> > > kettle black. :-)
> >
> > One nit, and then I'll let this issue rest :-)
> >
> > The scheduler has a long lock dependancy chain (nr_cpu_ids rq locks),
> > but it never takes all of them at the same time. Any one code path will
> > at most hold two rq locks.
>
> Aside from lockdep, is there a particular problem with taking 64k locks
> at once? (in a very slow path, of course) I don't think it causes a
> problem with preempt_count, does it cause issues with -rt kernel?
PI-chains might explode I guess, Thomas?
Besides that, I just have this voice in my head telling me that
minimizing the number of locks held is a good thing.
> Hey, something kind of cool (and OT) I've just thought of that we can
> do with ticket locks is to take tickets for 2 (or 64K) nested locks,
> and then wait for them both (all), so the cost is N*lock + longest spin,
> rather than N*lock + N*avg spin.
>
> That would mean even at the worst case of a huge amount of contention
> on all 64K locks, it should only take a couple of ms to take all of
> them (assuming max spin time isn't ridiculous).
>
> Probably not the kind of feature we want to expose widely, but for
> really special things like the scheduler, it might be a neat hack to
> save a few cycles ;) Traditional implementations would just have
> #define spin_lock_async spin_lock
> #define spin_lock_async_wait do {} while (0)
>
> Sorry it's offtopic, but if I didn't post it, I'd forget to. Might be
> a fun quick hack for someone.
It might just be worth it for double_rq_lock() - if you can sort out the
deadlock potential Miklos just raised ;-)
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-24 10:55 ` Miklos Szeredi
@ 2008-07-24 11:06 ` Nick Piggin
2008-08-01 21:10 ` Paul E. McKenney
0 siblings, 1 reply; 83+ messages in thread
From: Nick Piggin @ 2008-07-24 11:06 UTC (permalink / raw)
To: Miklos Szeredi
Cc: peterz, davem, jarkao2, Larry.Finger, kaber, torvalds, akpm,
netdev, linux-kernel, linux-wireless, mingo, paulmck
On Thursday 24 July 2008 20:55, Miklos Szeredi wrote:
> On Thu, 24 Jul 2008, Nick Piggin wrote:
> > Hey, something kind of cool (and OT) I've just thought of that we can
> > do with ticket locks is to take tickets for 2 (or 64K) nested locks,
> > and then wait for them both (all), so the cost is N*lock + longest spin,
> > rather than N*lock + N*avg spin.
>
> Isn't this deadlocky?
>
> E.g. one task takes ticket x=1, then other task comes in and takes x=2
> and y=1, then first task takes y=2. Then neither can actually
> complete both locks.
Oh duh, of course you still need mutual exclusion from the first lock
to order the subsequent ones :P
So yeah, it only works for N > 2 locks, and you have to spin_lock the
first one... so it's unsuitable for the scheduler.
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-23 20:14 ` David Miller
2008-07-24 7:00 ` Peter Zijlstra
@ 2008-07-25 17:04 ` Ingo Oeser
2008-07-25 18:36 ` Jarek Poplawski
1 sibling, 1 reply; 83+ messages in thread
From: Ingo Oeser @ 2008-07-25 17:04 UTC (permalink / raw)
To: David Miller
Cc: peterz, jarkao2, Larry.Finger, kaber, torvalds, akpm, netdev,
linux-kernel, linux-wireless, mingo
Hi David,
David Miller schrieb:
> From: Peter Zijlstra <peterz@infradead.org>
> Date: Wed, 23 Jul 2008 12:58:16 +0200
>
> > So I guess my question is, is netif_tx_lock() here to stay, or is the
> > right fix to convert all those drivers to use __netif_tx_lock() which
> > locks only a single queue?
>
> It's staying.
>
> It's trying to block all potential calls into the ->hard_start_xmit()
> method of the driver, and the only reliable way to do that is to take
> all the TX queue locks. And in one form or another, we're going to
> have this "grab/release all the TX queue locks" construct.
>
> I find it interesting that this cannot be simply described to lockdep
> :-)
I'm sure as hell I'm missing something, but can't it be done with this
pseudo-code:
netif_tx_lock(device)
{
        mutex_lock(device->queue_entry_mutex);
        foreach_queue_entries(queue, device->queues)
        {
                spin_lock(queue->tx_lock);
                set_noop_tx_handler(queue);
                spin_unlock(queue->tx_lock);
        }
        mutex_unlock(device->queue_entry_mutex);
}

netif_tx_unlock(device)
{
        mutex_lock(device->queue_entry_mutex);
        foreach_queue_entries(queue, device->queues)
        {
                spin_lock(queue->tx_lock);
                set_useful_tx_handler(queue);
                spin_unlock(queue->tx_lock);
        }
        mutex_unlock(device->queue_entry_mutex);
}
Then protect use of the queues with queue->tx_lock in the transmit
path. The initial setup of a queue doesn't need to be protected, since
no-one knows about the device yet. The final cleanup of the device
doesn't need to be protected either, because netif_tx_lock() and
netif_tx_unlock() should not be called after entering the final cleanup.
Some VM locking works this way...
Best Regards
Ingo Oeser
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-25 17:04 ` Ingo Oeser
@ 2008-07-25 18:36 ` Jarek Poplawski
2008-07-25 19:16 ` Johannes Berg
0 siblings, 1 reply; 83+ messages in thread
From: Jarek Poplawski @ 2008-07-25 18:36 UTC (permalink / raw)
To: Ingo Oeser
Cc: David Miller, peterz, Larry.Finger, kaber, torvalds, akpm, netdev,
linux-kernel, linux-wireless, mingo
On Fri, Jul 25, 2008 at 07:04:36PM +0200, Ingo Oeser wrote:
...
> I'm sure as hell, I miss sth. but can't it be done by this pseudo-code:
...And I really doubt it can't be done like this.
Jarek P.
>
> netif_tx_lock(device)
> {
>         mutex_lock(device->queue_entry_mutex);
>         foreach_queue_entries(queue, device->queues)
>         {
>                 spin_lock(queue->tx_lock);
>                 set_noop_tx_handler(queue);
>                 spin_unlock(queue->tx_lock);
>         }
>         mutex_unlock(device->queue_entry_mutex);
> }
>
> netif_tx_unlock(device)
> {
>         mutex_lock(device->queue_entry_mutex);
>         foreach_queue_entries(queue, device->queues)
>         {
>                 spin_lock(queue->tx_lock);
>                 set_useful_tx_handler(queue);
>                 spin_unlock(queue->tx_lock);
>         }
>         mutex_unlock(device->queue_entry_mutex);
> }
>
> Then protect use of the queues by queue->tx_lock in transmit path.
> The first setup of the queue doesn't need to be protected, since no-one
> knows the device. The final cleanup of the device doesn't need to be
> protected either, because netif_tx_lock() and netif_tx_unlock() should
> not be called after entering the final cleanup.
>
> Some VM locking works this way...
>
>
> Best Regards
>
> Ingo Oeser
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-25 18:36 ` Jarek Poplawski
@ 2008-07-25 19:16 ` Johannes Berg
2008-07-25 19:34 ` Jarek Poplawski
0 siblings, 1 reply; 83+ messages in thread
From: Johannes Berg @ 2008-07-25 19:16 UTC (permalink / raw)
To: Jarek Poplawski
Cc: Ingo Oeser, David Miller, peterz, Larry.Finger, kaber, torvalds,
akpm, netdev, linux-kernel, linux-wireless, mingo
On Fri, 2008-07-25 at 20:36 +0200, Jarek Poplawski wrote:
> On Fri, Jul 25, 2008 at 07:04:36PM +0200, Ingo Oeser wrote:
> ...
> > I'm sure as hell, I miss sth. but can't it be done by this pseudo-code:
>
> ...And I really doubt it can't be done like this.
Umm, of course it cannot, because then we'd have to take the mutex in
the TX path, which we cannot do. We cannot have another lock in the TX
path - what's so hard to understand about that? We need to be able to
lock all queues to lock out multiple TX paths at once in some (really)
slow paths, but without adding any extra lock overhead to the TX path,
especially not a single shared lock.
johannes
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-25 19:16 ` Johannes Berg
@ 2008-07-25 19:34 ` Jarek Poplawski
2008-07-25 19:36 ` Johannes Berg
0 siblings, 1 reply; 83+ messages in thread
From: Jarek Poplawski @ 2008-07-25 19:34 UTC (permalink / raw)
To: Johannes Berg
Cc: Ingo Oeser, David Miller, peterz, Larry.Finger, kaber, torvalds,
akpm, netdev, linux-kernel, linux-wireless, mingo
On Fri, Jul 25, 2008 at 09:16:24PM +0200, Johannes Berg wrote:
> On Fri, 2008-07-25 at 20:36 +0200, Jarek Poplawski wrote:
> > On Fri, Jul 25, 2008 at 07:04:36PM +0200, Ingo Oeser wrote:
> > ...
> > > I'm sure as hell, I miss sth. but can't it be done by this pseudo-code:
> >
> > ...And I really doubt it can't be done like this.
>
> Umm, of course it cannot, because then we'd have to take the mutex in
> the TX path, which we cannot. We cannot have another lock in the TX
> path, what's so hard to understand about? We need to be able to lock all
> queues to lock out multiple tx paths at once in some (really) slow paths
> but not have any extra lock overhead for the tx path, especially not a
> single lock.
But this mutex doesn't have to be a mutex. And it's not for the TX
path, only for "service" operations, just like netif_tx_lock(). The
fast path needs only queue->tx_lock.
Jarek P.
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-25 19:34 ` Jarek Poplawski
@ 2008-07-25 19:36 ` Johannes Berg
2008-07-25 20:01 ` Jarek Poplawski
0 siblings, 1 reply; 83+ messages in thread
From: Johannes Berg @ 2008-07-25 19:36 UTC (permalink / raw)
To: Jarek Poplawski
Cc: Ingo Oeser, David Miller, peterz, Larry.Finger, kaber, torvalds,
akpm, netdev, linux-kernel, linux-wireless, mingo
On Fri, 2008-07-25 at 21:34 +0200, Jarek Poplawski wrote:
> > Umm, of course it cannot, because then we'd have to take the mutex in
> > the TX path, which we cannot. We cannot have another lock in the TX
> > path, what's so hard to understand about? We need to be able to lock all
> > queues to lock out multiple tx paths at once in some (really) slow paths
> > but not have any extra lock overhead for the tx path, especially not a
> > single lock.
>
> But this mutex doesn't have to be mutex. And it's not for the tx path,
> only for "service" just like netif_tx_lock(). The fast path needs only
> queue->tx_lock.
No, we need to be able to lock out multiple TX paths at once.
johannes
^ permalink raw reply [flat|nested] 83+ messages in thread
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-25 19:36 ` Johannes Berg
@ 2008-07-25 20:01 ` Jarek Poplawski
2008-07-26 9:18 ` David Miller
0 siblings, 1 reply; 83+ messages in thread
From: Jarek Poplawski @ 2008-07-25 20:01 UTC (permalink / raw)
To: Johannes Berg
Cc: Ingo Oeser, David Miller, peterz, Larry.Finger, kaber, torvalds,
akpm, netdev, linux-kernel, linux-wireless, mingo
On Fri, Jul 25, 2008 at 09:36:15PM +0200, Johannes Berg wrote:
> On Fri, 2008-07-25 at 21:34 +0200, Jarek Poplawski wrote:
>
> > > Umm, of course it cannot, because then we'd have to take the mutex in
> > > the TX path, which we cannot do. We cannot have another lock in the TX
> > > path; what's so hard to understand about that? We need to be able to lock
> > > all queues to lock out multiple TX paths at once in some (really) slow
> > > paths, but without any extra lock overhead in the TX path, and especially
> > > not a single lock.
> >
> > But this mutex doesn't have to be a mutex. And it's not for the TX path,
> > only for "service" operations, just like netif_tx_lock(). The fast path
> > needs only queue->tx_lock.
>
> No, we need to be able to lock out multiple TX paths at once.
IMHO, it can do the same. We could e.g. insert an already-locked
spinlock into this noop_tx_handler(), to make everyone wait.
Jarek P.
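A rough sketch of that suggestion (entirely hypothetical -- noop_tx_handler()
refers to Ingo Oeser's earlier pseudo-code, which was never implemented, and
the lock name is invented):

	/* Sketch: the service path keeps this spinlock held for the
	 * duration of the slow operation; any TX path diverted through
	 * the noop handler spins on it, then reports busy so the skb
	 * gets requeued. */
	static DEFINE_SPINLOCK(service_lock);

	static int noop_tx_handler(struct sk_buff *skb, struct net_device *dev)
	{
		spin_lock(&service_lock);	/* waits until service is done */
		spin_unlock(&service_lock);
		return NETDEV_TX_BUSY;
	}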
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-25 20:01 ` Jarek Poplawski
@ 2008-07-26 9:18 ` David Miller
2008-07-26 10:53 ` Jarek Poplawski
2008-07-26 13:18 ` Jarek Poplawski
0 siblings, 2 replies; 83+ messages in thread
From: David Miller @ 2008-07-26 9:18 UTC (permalink / raw)
To: jarkao2
Cc: johannes, netdev, peterz, Larry.Finger, kaber, torvalds, akpm,
netdev, linux-kernel, linux-wireless, mingo
From: Jarek Poplawski <jarkao2@gmail.com>
Date: Fri, 25 Jul 2008 22:01:37 +0200
> On Fri, Jul 25, 2008 at 09:36:15PM +0200, Johannes Berg wrote:
> > On Fri, 2008-07-25 at 21:34 +0200, Jarek Poplawski wrote:
> >
> > No, we need to be able to lock out multiple TX paths at once.
>
> IMHO, it can do the same. We could e.g. insert an already-locked
> spinlock into this noop_tx_handler(), to make everyone wait.
I think there might be an easier way, but we may have
to modify the state bits a little.
Every call into ->hard_start_xmit() is made like this:
1. lock TX queue
2. check TX queue stopped
3. call ->hard_start_xmit() if not stopped
This means that we can in fact do something like:
	unsigned int i;

	for (i = 0; i < dev->num_tx_queues; i++) {
		struct netdev_queue *txq;

		txq = netdev_get_tx_queue(dev, i);
		spin_lock_bh(&txq->_xmit_lock);
		netif_tx_freeze_queue(txq);
		spin_unlock_bh(&txq->_xmit_lock);
	}
netif_tx_freeze_queue() just sets a new bit we add.
Then we go to the ->hard_start_xmit() call sites and check this new
"frozen" bit as well as the existing "stopped" bit.
When we unfreeze each queue later, we see if it is stopped, and if not
we schedule its qdisc for packet processing.
A patch below shows how the guarding would work. It doesn't implement
the actual freeze/unfreeze.
We need to use a side-state bit to do this because we don't
want this operation to get all mixed up with the queue waking
operations that the driver TX reclaim code will be doing
asynchronously.
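For reference, a minimal sketch of the freeze/unfreeze helpers this
describes (a sketch only, since the patch below deliberately omits them;
the unfreeze half assumes, as the final version later in the thread does,
that __netif_schedule() takes the queue's qdisc):

	/* Sketch: caller holds txq->_xmit_lock, so no TX path can race
	 * past us once the bit is visible. */
	static inline void netif_tx_freeze_queue(struct netdev_queue *txq)
	{
		set_bit(__QUEUE_STATE_FROZEN, &txq->state);
	}

	static inline void netif_tx_unfreeze_queue(struct netdev_queue *txq)
	{
		clear_bit(__QUEUE_STATE_FROZEN, &txq->state);
		/* If the driver has not stopped the queue for its own
		 * reasons, kick the qdisc so frozen-out packets move. */
		if (!test_bit(__QUEUE_STATE_XOFF, &txq->state))
			__netif_schedule(txq->qdisc);
	}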
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index b4d056c..cba98fb 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -440,6 +440,7 @@ static inline void napi_synchronize(const struct napi_struct *n)
enum netdev_queue_state_t
{
__QUEUE_STATE_XOFF,
+ __QUEUE_STATE_FROZEN,
};
struct netdev_queue {
@@ -1099,6 +1100,11 @@ static inline int netif_queue_stopped(const struct net_device *dev)
return netif_tx_queue_stopped(netdev_get_tx_queue(dev, 0));
}
+static inline int netif_tx_queue_frozen(const struct netdev_queue *dev_queue)
+{
+ return test_bit(__QUEUE_STATE_FROZEN, &dev_queue->state);
+}
+
/**
* netif_running - test if up
* @dev: network device
diff --git a/net/core/netpoll.c b/net/core/netpoll.c
index c127208..6c7af39 100644
--- a/net/core/netpoll.c
+++ b/net/core/netpoll.c
@@ -70,6 +70,7 @@ static void queue_process(struct work_struct *work)
local_irq_save(flags);
__netif_tx_lock(txq, smp_processor_id());
if (netif_tx_queue_stopped(txq) ||
+ netif_tx_queue_frozen(txq) ||
dev->hard_start_xmit(skb, dev) != NETDEV_TX_OK) {
skb_queue_head(&npinfo->txq, skb);
__netif_tx_unlock(txq);
diff --git a/net/core/pktgen.c b/net/core/pktgen.c
index c7d484f..3284605 100644
--- a/net/core/pktgen.c
+++ b/net/core/pktgen.c
@@ -3305,6 +3305,7 @@ static __inline__ void pktgen_xmit(struct pktgen_dev *pkt_dev)
txq = netdev_get_tx_queue(odev, queue_map);
if (netif_tx_queue_stopped(txq) ||
+ netif_tx_queue_frozen(txq) ||
need_resched()) {
idle_start = getCurUs();
@@ -3320,7 +3321,8 @@ static __inline__ void pktgen_xmit(struct pktgen_dev *pkt_dev)
pkt_dev->idle_acc += getCurUs() - idle_start;
- if (netif_tx_queue_stopped(txq)) {
+ if (netif_tx_queue_stopped(txq) ||
+ netif_tx_queue_frozen(txq)) {
pkt_dev->next_tx_us = getCurUs(); /* TODO */
pkt_dev->next_tx_ns = 0;
goto out; /* Try the next interface */
@@ -3352,7 +3354,8 @@ static __inline__ void pktgen_xmit(struct pktgen_dev *pkt_dev)
txq = netdev_get_tx_queue(odev, queue_map);
__netif_tx_lock_bh(txq);
- if (!netif_tx_queue_stopped(txq)) {
+ if (!netif_tx_queue_stopped(txq) &&
+ !netif_tx_queue_frozen(txq)) {
atomic_inc(&(pkt_dev->skb->users));
retry_now:
diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index fd2a6ca..f17551a 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -135,7 +135,8 @@ static inline int qdisc_restart(struct Qdisc *q)
txq = netdev_get_tx_queue(dev, skb_get_queue_mapping(skb));
HARD_TX_LOCK(dev, txq, smp_processor_id());
- if (!netif_subqueue_stopped(dev, skb))
+ if (!netif_tx_queue_stopped(txq) &&
+ !netif_tx_queue_frozen(txq))
ret = dev_hard_start_xmit(skb, dev, txq);
HARD_TX_UNLOCK(dev, txq);
@@ -162,7 +163,8 @@ static inline int qdisc_restart(struct Qdisc *q)
break;
}
- if (ret && netif_tx_queue_stopped(txq))
+ if (ret && (netif_tx_queue_stopped(txq) ||
+ netif_tx_queue_frozen(txq)))
ret = 0;
return ret;
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-26 9:18 ` David Miller
@ 2008-07-26 10:53 ` Jarek Poplawski
2008-07-26 13:18 ` Jarek Poplawski
1 sibling, 0 replies; 83+ messages in thread
From: Jarek Poplawski @ 2008-07-26 10:53 UTC (permalink / raw)
To: David Miller
Cc: johannes, netdev, peterz, Larry.Finger, kaber, torvalds, akpm,
netdev, linux-kernel, linux-wireless, mingo
On Sat, Jul 26, 2008 at 02:18:46AM -0700, David Miller wrote:
> From: Jarek Poplawski <jarkao2@gmail.com>
> Date: Fri, 25 Jul 2008 22:01:37 +0200
>
> > On Fri, Jul 25, 2008 at 09:36:15PM +0200, Johannes Berg wrote:
> > > On Fri, 2008-07-25 at 21:34 +0200, Jarek Poplawski wrote:
> > >
> > > No, we need to be able to lock out multiple TX paths at once.
> >
> > IMHO, it can do the same. We could e.g. insert an already-locked
> > spinlock into this noop_tx_handler(), to make everyone wait.
>
> I think there might be an easier way, but we may have
> to modify the state bits a little.
Yes, this definitely looks easier, but it costs one more little bit,
plus additional code to handle it in various places. Ingo's proposal
needs a (one?!) bit more thinking in one place, but it shouldn't add
even a single bit to the TX path (and it looks really cool!). Of
course, it could be reconsidered at some other time too.
BTW, it seems that with "Ingo's method" this netif_queue_stopped()
check could be removed too - the change of handlers could be done with
single qdiscs as well.
Jarek P.
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-26 9:18 ` David Miller
2008-07-26 10:53 ` Jarek Poplawski
@ 2008-07-26 13:18 ` Jarek Poplawski
2008-07-27 0:34 ` David Miller
1 sibling, 1 reply; 83+ messages in thread
From: Jarek Poplawski @ 2008-07-26 13:18 UTC (permalink / raw)
To: David Miller
Cc: johannes, netdev, peterz, Larry.Finger, kaber, torvalds, akpm,
netdev, linux-kernel, linux-wireless, mingo
On Sat, Jul 26, 2008 at 02:18:46AM -0700, David Miller wrote:
...
> I think there might be an easier way, but we may have
> to modify the state bits a little.
>
> Every call into ->hard_start_xmit() is made like this:
>
> 1. lock TX queue
> 2. check TX queue stopped
> 3. call ->hard_start_xmit() if not stopped
>
> This means that we can in fact do something like:
>
> 	unsigned int i;
>
> 	for (i = 0; i < dev->num_tx_queues; i++) {
> 		struct netdev_queue *txq;
>
> 		txq = netdev_get_tx_queue(dev, i);
> 		spin_lock_bh(&txq->_xmit_lock);
> 		netif_tx_freeze_queue(txq);
> 		spin_unlock_bh(&txq->_xmit_lock);
> 	}
>
> netif_tx_freeze_queue() just sets a new bit we add.
>
> Then we go to the ->hard_start_xmit() call sites and check this new
> "frozen" bit as well as the existing "stopped" bit.
>
> When we unfreeze each queue later, we see if it is stopped, and if not
> we schedule its qdisc for packet processing.
I guess some additional synchronization will still need to be added to
prevent parallel freezes, and especially unfreezes.
Jarek P.
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-26 13:18 ` Jarek Poplawski
@ 2008-07-27 0:34 ` David Miller
2008-07-27 20:37 ` Jarek Poplawski
0 siblings, 1 reply; 83+ messages in thread
From: David Miller @ 2008-07-27 0:34 UTC (permalink / raw)
To: jarkao2
Cc: johannes, netdev, peterz, Larry.Finger, kaber, torvalds, akpm,
netdev, linux-kernel, linux-wireless, mingo
From: Jarek Poplawski <jarkao2@gmail.com>
Date: Sat, 26 Jul 2008 15:18:38 +0200
> I guess some additional synchronization will still need to be added to
> prevent parallel freezes, and especially unfreezes.
Yes, that could be a problem. Using test_and_set_bit() can
guard the freezing sequence itself, but it won't handle
letting two threads of control freeze and unfreeze safely
without a reference count.
We want this thing to be usable flexibly, which means
we can't just assume that this is a short code sequence and
the unfreeze will come quickly. That pretty much rules
out using a new lock around the operation or anything
like that.
So I guess we could replace the state bit with a reference
count. It doesn't even need to be atomic since it is set
and tested under dev_queue->_xmit_lock.
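A minimal sketch of the refcount variant being described (hypothetical --
the idea is dropped again a few messages later, and the frozen_refcnt
field name is invented for illustration):

	/* All helpers assume the caller holds txq->_xmit_lock, so a
	 * plain int is enough -- no atomic operations required. */
	static inline void netif_tx_freeze_queue(struct netdev_queue *txq)
	{
		txq->frozen_refcnt++;
	}

	static inline void netif_tx_unfreeze_queue(struct netdev_queue *txq)
	{
		txq->frozen_refcnt--;
	}

	static inline int netif_tx_queue_frozen(const struct netdev_queue *txq)
	{
		return txq->frozen_refcnt != 0;
	}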
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-27 0:34 ` David Miller
@ 2008-07-27 20:37 ` Jarek Poplawski
2008-07-31 12:29 ` David Miller
0 siblings, 1 reply; 83+ messages in thread
From: Jarek Poplawski @ 2008-07-27 20:37 UTC (permalink / raw)
To: David Miller
Cc: johannes, netdev, peterz, Larry.Finger, kaber, torvalds, akpm,
netdev, linux-kernel, linux-wireless, mingo
On Sat, Jul 26, 2008 at 05:34:34PM -0700, David Miller wrote:
> From: Jarek Poplawski <jarkao2@gmail.com>
> Date: Sat, 26 Jul 2008 15:18:38 +0200
>
> > I guess some additional synchronization will still need to be added to
> > prevent parallel freezes, and especially unfreezes.
>
> Yes, that could be a problem. Using test_and_set_bit() can
> guard the freezing sequence itself, but it won't handle
> letting two threads of control freeze and unfreeze safely
> without a reference count.
>
> We want this thing to be usable flexibly, which means
> we can't just assume that this is a short code sequence and
> the unfreeze will come quickly. That pretty much rules
> out using a new lock around the operation or anything
> like that.
>
> So I guess we could replace the state bit with a reference
> count. It doesn't even need to be atomic since it is set
> and tested under dev_queue->_xmit_lock.
Looks like enough to me. (Probably it could even share space with
the state.)
Jarek P.
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-27 20:37 ` Jarek Poplawski
@ 2008-07-31 12:29 ` David Miller
2008-07-31 12:38 ` Nick Piggin
` (2 more replies)
0 siblings, 3 replies; 83+ messages in thread
From: David Miller @ 2008-07-31 12:29 UTC (permalink / raw)
To: jarkao2
Cc: johannes, netdev, peterz, Larry.Finger, kaber, torvalds, akpm,
netdev, linux-kernel, linux-wireless, mingo
From: Jarek Poplawski <jarkao2@gmail.com>
Date: Sun, 27 Jul 2008 22:37:57 +0200
> Looks like enough to me. (Probably it could even share space with
> the state.)
So I made some progress on this, three things:
1) I remember why I chose to use a bit in my design: it's so that
it does not increase the cost of the checks in the fast paths.
test_bit(X) && test_bit(Y) can be combined into a single test by
the compiler.
2) We can't use the reference counting scheme, because we don't want
to let a second cpu into these protected code paths just because
another is in the middle of using a freeze too.
3) So we can simply put a top-level TX spinlock around these things.
Therefore all the hot paths:
a) grab _xmit_lock
b) check XOFF and FROZEN
c) only call ->hard_start_xmit() if both bits are clear
netif_tx_lock() does:

1) grab netdev->tx_global_lock
2) for_each_tx_queue() {
	lock(txq);
	set_bit(FROZEN);
	unlock(txq);
   }

and unlock does:

1) for_each_tx_queue() {
	clear_bit(FROZEN);
	if (!test_bit(XOFF))
		__netif_schedule();
   }
2) release netdev->tx_global_lock
And this seems to satisfy all the constraints, which are:
1) Must act like a lock and protect execution of the code path
which occurs inside of "netif_tx_{lock,unlock}()"
2) Must ensure no cpus are executing inside of ->hard_start_xmit()
after netif_tx_lock() returns.
3) Must not try to grab all the TX queue locks at once.
This top-level tx_global_lock also simplifies the freezing, as
it makes sure only one cpu is initiating or finishing a freeze
at any given time.
I've also adjusted code that really and truly only wanted to
lock one queue at a time, which in particular was IFB and the
teql scheduler.
It's late here, but I'll start testing the following patch on my
multiqueue capable cards after some sleep.
diff --git a/drivers/net/ifb.c b/drivers/net/ifb.c
index 0960e69..e4fbefc 100644
--- a/drivers/net/ifb.c
+++ b/drivers/net/ifb.c
@@ -69,18 +69,20 @@ static void ri_tasklet(unsigned long dev)
struct net_device *_dev = (struct net_device *)dev;
struct ifb_private *dp = netdev_priv(_dev);
struct net_device_stats *stats = &_dev->stats;
+ struct netdev_queue *txq;
struct sk_buff *skb;
+ txq = netdev_get_tx_queue(_dev, 0);
dp->st_task_enter++;
if ((skb = skb_peek(&dp->tq)) == NULL) {
dp->st_txq_refl_try++;
- if (netif_tx_trylock(_dev)) {
+ if (__netif_tx_trylock(txq)) {
dp->st_rxq_enter++;
while ((skb = skb_dequeue(&dp->rq)) != NULL) {
skb_queue_tail(&dp->tq, skb);
dp->st_rx2tx_tran++;
}
- netif_tx_unlock(_dev);
+ __netif_tx_unlock(txq);
} else {
/* reschedule */
dp->st_rxq_notenter++;
@@ -115,7 +117,7 @@ static void ri_tasklet(unsigned long dev)
BUG();
}
- if (netif_tx_trylock(_dev)) {
+ if (__netif_tx_trylock(txq)) {
dp->st_rxq_check++;
if ((skb = skb_peek(&dp->rq)) == NULL) {
dp->tasklet_pending = 0;
@@ -123,10 +125,10 @@ static void ri_tasklet(unsigned long dev)
netif_wake_queue(_dev);
} else {
dp->st_rxq_rsch++;
- netif_tx_unlock(_dev);
+ __netif_tx_unlock(txq);
goto resched;
}
- netif_tx_unlock(_dev);
+ __netif_tx_unlock(txq);
} else {
resched:
dp->tasklet_pending = 1;
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index b4d056c..ee583f6 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -440,6 +440,7 @@ static inline void napi_synchronize(const struct napi_struct *n)
enum netdev_queue_state_t
{
__QUEUE_STATE_XOFF,
+ __QUEUE_STATE_FROZEN,
};
struct netdev_queue {
@@ -636,7 +637,7 @@ struct net_device
unsigned int real_num_tx_queues;
unsigned long tx_queue_len; /* Max frames per queue allowed */
-
+ spinlock_t tx_global_lock;
/*
* One part is mostly used on xmit path (device)
*/
@@ -1099,6 +1100,11 @@ static inline int netif_queue_stopped(const struct net_device *dev)
return netif_tx_queue_stopped(netdev_get_tx_queue(dev, 0));
}
+static inline int netif_tx_queue_frozen(const struct netdev_queue *dev_queue)
+{
+ return test_bit(__QUEUE_STATE_FROZEN, &dev_queue->state);
+}
+
/**
* netif_running - test if up
* @dev: network device
@@ -1475,6 +1481,26 @@ static inline void __netif_tx_lock_bh(struct netdev_queue *txq)
txq->xmit_lock_owner = smp_processor_id();
}
+static inline int __netif_tx_trylock(struct netdev_queue *txq)
+{
+ int ok = spin_trylock(&txq->_xmit_lock);
+ if (likely(ok))
+ txq->xmit_lock_owner = smp_processor_id();
+ return ok;
+}
+
+static inline void __netif_tx_unlock(struct netdev_queue *txq)
+{
+ txq->xmit_lock_owner = -1;
+ spin_unlock(&txq->_xmit_lock);
+}
+
+static inline void __netif_tx_unlock_bh(struct netdev_queue *txq)
+{
+ txq->xmit_lock_owner = -1;
+ spin_unlock_bh(&txq->_xmit_lock);
+}
+
/**
* netif_tx_lock - grab network device transmit lock
* @dev: network device
@@ -1484,12 +1510,23 @@ static inline void __netif_tx_lock_bh(struct netdev_queue *txq)
*/
static inline void netif_tx_lock(struct net_device *dev)
{
- int cpu = smp_processor_id();
unsigned int i;
+ int cpu;
+ spin_lock(&dev->tx_global_lock);
+ cpu = smp_processor_id();
for (i = 0; i < dev->num_tx_queues; i++) {
struct netdev_queue *txq = netdev_get_tx_queue(dev, i);
+
+ /* We are the only thread of execution doing a
+ * freeze, but we have to grab the _xmit_lock in
+ * order to synchronize with threads which are in
+ * the ->hard_start_xmit() handler and already
+ * checked the frozen bit.
+ */
__netif_tx_lock(txq, cpu);
+ set_bit(__QUEUE_STATE_FROZEN, &txq->state);
+ __netif_tx_unlock(txq);
}
}
@@ -1499,40 +1536,22 @@ static inline void netif_tx_lock_bh(struct net_device *dev)
netif_tx_lock(dev);
}
-static inline int __netif_tx_trylock(struct netdev_queue *txq)
-{
- int ok = spin_trylock(&txq->_xmit_lock);
- if (likely(ok))
- txq->xmit_lock_owner = smp_processor_id();
- return ok;
-}
-
-static inline int netif_tx_trylock(struct net_device *dev)
-{
- return __netif_tx_trylock(netdev_get_tx_queue(dev, 0));
-}
-
-static inline void __netif_tx_unlock(struct netdev_queue *txq)
-{
- txq->xmit_lock_owner = -1;
- spin_unlock(&txq->_xmit_lock);
-}
-
-static inline void __netif_tx_unlock_bh(struct netdev_queue *txq)
-{
- txq->xmit_lock_owner = -1;
- spin_unlock_bh(&txq->_xmit_lock);
-}
-
static inline void netif_tx_unlock(struct net_device *dev)
{
unsigned int i;
for (i = 0; i < dev->num_tx_queues; i++) {
struct netdev_queue *txq = netdev_get_tx_queue(dev, i);
- __netif_tx_unlock(txq);
- }
+ /* No need to grab the _xmit_lock here. If the
+ * queue is not stopped for another reason, we
+ * force a schedule.
+ */
+ clear_bit(__QUEUE_STATE_FROZEN, &txq->state);
+ if (!test_bit(__QUEUE_STATE_XOFF, &txq->state))
+ __netif_schedule(txq->qdisc);
+ }
+ spin_unlock(&dev->tx_global_lock);
}
static inline void netif_tx_unlock_bh(struct net_device *dev)
@@ -1556,13 +1575,18 @@ static inline void netif_tx_unlock_bh(struct net_device *dev)
static inline void netif_tx_disable(struct net_device *dev)
{
unsigned int i;
+ int cpu;
- netif_tx_lock_bh(dev);
+ local_bh_disable();
+ cpu = smp_processor_id();
for (i = 0; i < dev->num_tx_queues; i++) {
struct netdev_queue *txq = netdev_get_tx_queue(dev, i);
+
+ __netif_tx_lock(txq, cpu);
netif_tx_stop_queue(txq);
+ __netif_tx_unlock(txq);
}
- netif_tx_unlock_bh(dev);
+ local_bh_enable();
}
static inline void netif_addr_lock(struct net_device *dev)
diff --git a/net/core/dev.c b/net/core/dev.c
index 63d6bcd..69320a5 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4200,6 +4200,7 @@ static void netdev_init_queues(struct net_device *dev)
{
netdev_init_one_queue(dev, &dev->rx_queue, NULL);
netdev_for_each_tx_queue(dev, netdev_init_one_queue, NULL);
+ spin_lock_init(&dev->tx_global_lock);
}
/**
diff --git a/net/core/netpoll.c b/net/core/netpoll.c
index c127208..6c7af39 100644
--- a/net/core/netpoll.c
+++ b/net/core/netpoll.c
@@ -70,6 +70,7 @@ static void queue_process(struct work_struct *work)
local_irq_save(flags);
__netif_tx_lock(txq, smp_processor_id());
if (netif_tx_queue_stopped(txq) ||
+ netif_tx_queue_frozen(txq) ||
dev->hard_start_xmit(skb, dev) != NETDEV_TX_OK) {
skb_queue_head(&npinfo->txq, skb);
__netif_tx_unlock(txq);
diff --git a/net/core/pktgen.c b/net/core/pktgen.c
index c7d484f..3284605 100644
--- a/net/core/pktgen.c
+++ b/net/core/pktgen.c
@@ -3305,6 +3305,7 @@ static __inline__ void pktgen_xmit(struct pktgen_dev *pkt_dev)
txq = netdev_get_tx_queue(odev, queue_map);
if (netif_tx_queue_stopped(txq) ||
+ netif_tx_queue_frozen(txq) ||
need_resched()) {
idle_start = getCurUs();
@@ -3320,7 +3321,8 @@ static __inline__ void pktgen_xmit(struct pktgen_dev *pkt_dev)
pkt_dev->idle_acc += getCurUs() - idle_start;
- if (netif_tx_queue_stopped(txq)) {
+ if (netif_tx_queue_stopped(txq) ||
+ netif_tx_queue_frozen(txq)) {
pkt_dev->next_tx_us = getCurUs(); /* TODO */
pkt_dev->next_tx_ns = 0;
goto out; /* Try the next interface */
@@ -3352,7 +3354,8 @@ static __inline__ void pktgen_xmit(struct pktgen_dev *pkt_dev)
txq = netdev_get_tx_queue(odev, queue_map);
__netif_tx_lock_bh(txq);
- if (!netif_tx_queue_stopped(txq)) {
+ if (!netif_tx_queue_stopped(txq) &&
+ !netif_tx_queue_frozen(txq)) {
atomic_inc(&(pkt_dev->skb->users));
retry_now:
diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index 345838a..9c9cd4d 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -135,7 +135,8 @@ static inline int qdisc_restart(struct Qdisc *q)
txq = netdev_get_tx_queue(dev, skb_get_queue_mapping(skb));
HARD_TX_LOCK(dev, txq, smp_processor_id());
- if (!netif_subqueue_stopped(dev, skb))
+ if (!netif_tx_queue_stopped(txq) &&
+ !netif_tx_queue_frozen(txq))
ret = dev_hard_start_xmit(skb, dev, txq);
HARD_TX_UNLOCK(dev, txq);
@@ -162,7 +163,8 @@ static inline int qdisc_restart(struct Qdisc *q)
break;
}
- if (ret && netif_tx_queue_stopped(txq))
+ if (ret && (netif_tx_queue_stopped(txq) ||
+ netif_tx_queue_frozen(txq)))
ret = 0;
return ret;
diff --git a/net/sched/sch_teql.c b/net/sched/sch_teql.c
index 5372236..2c35c67 100644
--- a/net/sched/sch_teql.c
+++ b/net/sched/sch_teql.c
@@ -305,10 +305,11 @@ restart:
switch (teql_resolve(skb, skb_res, slave)) {
case 0:
- if (netif_tx_trylock(slave)) {
- if (!__netif_subqueue_stopped(slave, subq) &&
+ if (__netif_tx_trylock(slave_txq)) {
+ if (!netif_tx_queue_stopped(slave_txq) &&
+ !netif_tx_queue_frozen(slave_txq) &&
slave->hard_start_xmit(skb, slave) == 0) {
- netif_tx_unlock(slave);
+ __netif_tx_unlock(slave_txq);
master->slaves = NEXT_SLAVE(q);
netif_wake_queue(dev);
master->stats.tx_packets++;
@@ -316,7 +317,7 @@ restart:
qdisc_pkt_len(skb);
return 0;
}
- netif_tx_unlock(slave);
+ __netif_tx_unlock(slave_txq);
}
if (netif_queue_stopped(dev))
busy = 1;
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-31 12:29 ` David Miller
@ 2008-07-31 12:38 ` Nick Piggin
2008-07-31 12:44 ` David Miller
2008-08-01 4:27 ` David Miller
2008-08-01 6:48 ` Jarek Poplawski
2 siblings, 1 reply; 83+ messages in thread
From: Nick Piggin @ 2008-07-31 12:38 UTC (permalink / raw)
To: David Miller
Cc: jarkao2, johannes, netdev, peterz, Larry.Finger, kaber, torvalds,
akpm, netdev, linux-kernel, linux-wireless, mingo
On Thursday 31 July 2008 22:29, David Miller wrote:
> From: Jarek Poplawski <jarkao2@gmail.com>
> Date: Sun, 27 Jul 2008 22:37:57 +0200
>
> > Looks like enough to me. (Probably it could even share space with
> > the state.)
>
> So I made some progress on this, three things:
>
> 1) I remember why I chose to use a bit in my design: it's so that
> it does not increase the cost of the checks in the fast paths.
> test_bit(X) && test_bit(Y) can be combined into a single test by
> the compiler.
Except for the braindead volatile that gets stuck on the bitops pointer.
Last time I complained about this, a lot of noise was made and I think
Linus wanted it to stay around so we could pass volatile pointers to
bitops & co without warnings. I say we should just remove the volatile
and kill any callers that might warn...
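An illustration of the missed optimization (queue_usable() is an invented
helper): with a plain non-volatile load the compiler may fold both bit
tests into one load and one masked compare, which the volatile on the
test_bit() pointer forbids:

	static int queue_usable(const struct netdev_queue *txq)
	{
		unsigned long state = txq->state;	/* one load */

		return !(state & ((1UL << __QUEUE_STATE_XOFF) |
				  (1UL << __QUEUE_STATE_FROZEN)));
	}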
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-31 12:38 ` Nick Piggin
@ 2008-07-31 12:44 ` David Miller
0 siblings, 0 replies; 83+ messages in thread
From: David Miller @ 2008-07-31 12:44 UTC (permalink / raw)
To: nickpiggin
Cc: jarkao2, johannes, netdev, peterz, Larry.Finger, kaber, torvalds,
akpm, netdev, linux-kernel, linux-wireless, mingo
From: Nick Piggin <nickpiggin@yahoo.com.au>
Date: Thu, 31 Jul 2008 22:38:19 +1000
> Except for the braindead volatile that gets stuck on the bitops pointer.
>
> Last time I complained about this, a lot of noise was made and I think
> Linus wanted it to stay around so we could pass volatile pointers to
> bitops & co without warnings. I say we should just remove the volatile
> and kill any callers that might warn...
Ho hum... :)
Another way to approach that, and keep the volatile, is to have
a "test_flags()" interface that takes the bit mask of values
you want to test, for cases where you know it is a single-word
flags value.
The downside is that this kind of interface is easy to use
incorrectly, especially when accesses to the same flags mix
both test_bit() and test_flags().
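A sketch of what such an interface might look like (hypothetical; it was
never merged, and the mask constants in the usage line are invented
shorthands for the corresponding bit masks):

	/* Test several bits of a single-word flags value with one load. */
	static inline int test_flags(unsigned long mask,
				     const volatile unsigned long *addr)
	{
		return (*addr & mask) != 0;
	}

A call site could then do something like

	if (!test_flags(XOFF_MASK | FROZEN_MASK, &txq->state))

instead of two test_bit() calls, with the mixing hazard described above.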
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-31 12:29 ` David Miller
2008-07-31 12:38 ` Nick Piggin
@ 2008-08-01 4:27 ` David Miller
2008-08-01 7:09 ` Peter Zijlstra
2008-08-01 6:48 ` Jarek Poplawski
2 siblings, 1 reply; 83+ messages in thread
From: David Miller @ 2008-08-01 4:27 UTC (permalink / raw)
To: jarkao2
Cc: johannes, netdev, peterz, Larry.Finger, kaber, torvalds, akpm,
netdev, linux-kernel, linux-wireless, mingo
From: David Miller <davem@davemloft.net>
Date: Thu, 31 Jul 2008 05:29:32 -0700 (PDT)
> It's late here, but I'll start testing the following patch on my
> multiqueue capable cards after some sleep.
As a quick followup, I tested this on a machine where I had
a multiqueue interface and could reproduce the lockdep warnings,
and the patch makes them go away.
So I've pushed the patch into net-2.6 and will send it to Linus.
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-31 12:29 ` David Miller
2008-07-31 12:38 ` Nick Piggin
2008-08-01 4:27 ` David Miller
@ 2008-08-01 6:48 ` Jarek Poplawski
2008-08-01 7:00 ` David Miller
2008-08-01 7:01 ` Jarek Poplawski
2 siblings, 2 replies; 83+ messages in thread
From: Jarek Poplawski @ 2008-08-01 6:48 UTC (permalink / raw)
To: David Miller
Cc: johannes, netdev, peterz, Larry.Finger, kaber, torvalds, akpm,
netdev, linux-kernel, linux-wireless, mingo
On Thu, Jul 31, 2008 at 05:29:32AM -0700, David Miller wrote:
> From: Jarek Poplawski <jarkao2@gmail.com>
> Date: Sun, 27 Jul 2008 22:37:57 +0200
>
> > Looks like enough to me. (Probably it could even share space with
> > the state.)
Alas I've some doubts here...
...
> static inline void netif_tx_unlock(struct net_device *dev)
> {
> unsigned int i;
>
> for (i = 0; i < dev->num_tx_queues; i++) {
> struct netdev_queue *txq = netdev_get_tx_queue(dev, i);
> - __netif_tx_unlock(txq);
> - }
>
> + /* No need to grab the _xmit_lock here. If the
> + * queue is not stopped for another reason, we
> + * force a schedule.
> + */
> + clear_bit(__QUEUE_STATE_FROZEN, &txq->state);
The comments in asm-x86/bitops.h for set_bit()/clear_bit() are rather
vague about reordering on non-x86: wouldn't e.g.
smp_mb__before_clear_bit() be useful here?
> + if (!test_bit(__QUEUE_STATE_XOFF, &txq->state))
> + __netif_schedule(txq->qdisc);
> + }
> + spin_unlock(&dev->tx_global_lock);
> }
...
> diff --git a/net/core/dev.c b/net/core/dev.c
> index 63d6bcd..69320a5 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -4200,6 +4200,7 @@ static void netdev_init_queues(struct net_device *dev)
> {
> netdev_init_one_queue(dev, &dev->rx_queue, NULL);
> netdev_for_each_tx_queue(dev, netdev_init_one_queue, NULL);
> + spin_lock_init(&dev->tx_global_lock);
This will probably need some lockdep annotations similar to
_xmit_lock.
> diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
> index 345838a..9c9cd4d 100644
> --- a/net/sched/sch_generic.c
> +++ b/net/sched/sch_generic.c
> @@ -135,7 +135,8 @@ static inline int qdisc_restart(struct Qdisc *q)
> txq = netdev_get_tx_queue(dev, skb_get_queue_mapping(skb));
>
> HARD_TX_LOCK(dev, txq, smp_processor_id());
> - if (!netif_subqueue_stopped(dev, skb))
> + if (!netif_tx_queue_stopped(txq) &&
> + !netif_tx_queue_frozen(txq))
> ret = dev_hard_start_xmit(skb, dev, txq);
> HARD_TX_UNLOCK(dev, txq);
This part is the most doubtful to me: before this patch, callers would
wait on this lock. Now they take the lock without a problem, check the
flags, and then have to take this lock yet again later, doing some
re-queueing in the meantime.
So it seems HARD_TX_LOCK should rather do some busy looping now with a
trylock, re-checking the _FROZEN flag. Maybe this should even be done
in __netif_tx_lock(). On the other hand, taking such a lock this way
shouldn't block the owner of tx_global_lock too much.
Jarek P.
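A sketch of that busy-looping variant (hypothetical, and as David explains
below, unnecessary; the helper name is invented):

	/* Spin on the trylock, but bail out once the queue is seen frozen
	 * so the caller requeues rather than blocking the freezer. */
	static inline int __netif_tx_lock_or_frozen(struct netdev_queue *txq)
	{
		while (!__netif_tx_trylock(txq)) {
			if (netif_tx_queue_frozen(txq))
				return 0;	/* caller should requeue */
			cpu_relax();
		}
		return 1;
	}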
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-08-01 6:48 ` Jarek Poplawski
@ 2008-08-01 7:00 ` David Miller
2008-08-01 7:01 ` Jarek Poplawski
1 sibling, 0 replies; 83+ messages in thread
From: David Miller @ 2008-08-01 7:00 UTC (permalink / raw)
To: jarkao2
Cc: johannes, netdev, peterz, Larry.Finger, kaber, torvalds, akpm,
netdev, linux-kernel, linux-wireless, mingo
From: Jarek Poplawski <jarkao2@gmail.com>
Date: Fri, 1 Aug 2008 06:48:10 +0000
> On Thu, Jul 31, 2008 at 05:29:32AM -0700, David Miller wrote:
> > + /* No need to grab the _xmit_lock here. If the
> > + * queue is not stopped for another reason, we
> > + * force a schedule.
> > + */
> > + clear_bit(__QUEUE_STATE_FROZEN, &txq->state);
>
> The comments in asm-x86/bitops.h for set_bit()/clear_bit() are rather
> vague about reordering on non-x86: wouldn't e.g.
> smp_mb__before_clear_bit() be useful here?
It doesn't matter, we need no synchronization here at all.
We unconditionally perform a __netif_schedule(), and that
will run the TX queue on the local cpu. We will take the
_xmit_lock at least once if in fact the queue was not
stopped before we first froze it.
> > diff --git a/net/core/dev.c b/net/core/dev.c
> > index 63d6bcd..69320a5 100644
> > --- a/net/core/dev.c
> > +++ b/net/core/dev.c
> > @@ -4200,6 +4200,7 @@ static void netdev_init_queues(struct net_device *dev)
> > {
> > netdev_init_one_queue(dev, &dev->rx_queue, NULL);
> > netdev_for_each_tx_queue(dev, netdev_init_one_queue, NULL);
> > + spin_lock_init(&dev->tx_global_lock);
>
> This will probably need some lockdep annotations similar to
> _xmit_lock.
I highly doubt it. It will never be taken nested with another
device's instance.
It is only ->hard_start_xmit() leading to another ->hard_start_xmit()
where this can currently happen, but tx_global_lock will not be used
in such paths.
> > @@ -135,7 +135,8 @@ static inline int qdisc_restart(struct Qdisc *q)
> > txq = netdev_get_tx_queue(dev, skb_get_queue_mapping(skb));
> >
> > HARD_TX_LOCK(dev, txq, smp_processor_id());
> > - if (!netif_subqueue_stopped(dev, skb))
> > + if (!netif_tx_queue_stopped(txq) &&
> > + !netif_tx_queue_frozen(txq))
> > ret = dev_hard_start_xmit(skb, dev, txq);
> > HARD_TX_UNLOCK(dev, txq);
>
> This thing is the most doubtful to me: before this patch callers would
> wait on this lock. Now they take the lock without problems, check the
> flags, and let to take this lock again, doing some re-queing in the
> meantime.
>
> So, it seems HARD_TX_LOCK should rather do some busy looping now with
> a trylock, and re-checking the _FROZEN flag. Maybe even this should
> be done in __netif_tx_lock(). On the other hand, this shouldn't block
> too much the owner of tx_global_lock() with taking such a lock.
'ret' will be NETDEV_TX_BUSY in such a case (finding the queue
frozen), which will cause the while() loop in __qdisc_run() to
terminate.
The freezer will unconditionally schedule a new __qdisc_run()
when it unfreezes the queue.
Sure it's possible for some cpus to bang in and out of there
a few times, but that's completely harmless. And it can only
happen a few times since this freeze state is only held across
a critical section.
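Roughly, the loop being referred to (a simplified sketch of __qdisc_run()
from net/sched/sch_generic.c of that era, not the verbatim code):

	void __qdisc_run(struct Qdisc *q)
	{
		unsigned long start_time = jiffies;

		/* qdisc_restart() returning 0 -- which the frozen check
		 * forces -- terminates packet processing here. */
		while (qdisc_restart(q)) {
			/* Yield after a jiffy or when preemption is due;
			 * the rest is deferred to the TX softirq. */
			if (need_resched() || jiffies != start_time) {
				__netif_schedule(q);
				break;
			}
		}

		clear_bit(__QDISC_STATE_RUNNING, &q->state);
	}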
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-08-01 7:01 ` Jarek Poplawski
@ 2008-08-01 7:01 ` David Miller
2008-08-01 7:41 ` Jarek Poplawski
0 siblings, 1 reply; 83+ messages in thread
From: David Miller @ 2008-08-01 7:01 UTC (permalink / raw)
To: jarkao2
Cc: johannes, netdev, peterz, Larry.Finger, kaber, torvalds, akpm,
netdev, linux-kernel, linux-wireless, mingo
From: Jarek Poplawski <jarkao2@gmail.com>
Date: Fri, 1 Aug 2008 07:01:50 +0000
> On Fri, Aug 01, 2008 at 06:48:10AM +0000, Jarek Poplawski wrote:
> > On Thu, Jul 31, 2008 at 05:29:32AM -0700, David Miller wrote:
> ...
> > > diff --git a/net/core/dev.c b/net/core/dev.c
> > > index 63d6bcd..69320a5 100644
> > > --- a/net/core/dev.c
> > > +++ b/net/core/dev.c
> > > @@ -4200,6 +4200,7 @@ static void netdev_init_queues(struct net_device *dev)
> > > {
> > > netdev_init_one_queue(dev, &dev->rx_queue, NULL);
> > > netdev_for_each_tx_queue(dev, netdev_init_one_queue, NULL);
> > > + spin_lock_init(&dev->tx_global_lock);
> >
> > This will probably need some lockdep annotations similar to
> > _xmit_lock.
>
> ...BTW, we could probably also consider an optimization here: the
> xmit_lock of the first queue could be treated as special, and only
> its owner could do such a freezing. This would avoid functional
> changes for non-multiqueue devices. On the other hand, everyone would
> have to remember this special treatment (so, e.g., a lockdep
> initialization separate from all the others).
I think special-casing queue zero's lock is a bad idea.
Having a real top-level synchronizer is a powerful tool and
we could use it for other things.
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-08-01 6:48 ` Jarek Poplawski
2008-08-01 7:00 ` David Miller
@ 2008-08-01 7:01 ` Jarek Poplawski
2008-08-01 7:01 ` David Miller
1 sibling, 1 reply; 83+ messages in thread
From: Jarek Poplawski @ 2008-08-01 7:01 UTC (permalink / raw)
To: David Miller
Cc: johannes, netdev, peterz, Larry.Finger, kaber, torvalds, akpm,
netdev, linux-kernel, linux-wireless, mingo
On Fri, Aug 01, 2008 at 06:48:10AM +0000, Jarek Poplawski wrote:
> On Thu, Jul 31, 2008 at 05:29:32AM -0700, David Miller wrote:
...
> > diff --git a/net/core/dev.c b/net/core/dev.c
> > index 63d6bcd..69320a5 100644
> > --- a/net/core/dev.c
> > +++ b/net/core/dev.c
> > @@ -4200,6 +4200,7 @@ static void netdev_init_queues(struct net_device *dev)
> > {
> > netdev_init_one_queue(dev, &dev->rx_queue, NULL);
> > netdev_for_each_tx_queue(dev, netdev_init_one_queue, NULL);
> > + spin_lock_init(&dev->tx_global_lock);
>
> This will probably need some lockdep annotations similar to
> _xmit_lock.
...BTW, we could probably also consider an optimization here: the
xmit_lock of the first queue could be treated as special, and only
its owner could do such a freezing. This would avoid functional
changes for non-multiqueue devices. On the other hand, everyone would
have to remember this special treatment (so, e.g., a lockdep
initialization separate from all the others).
Jarek P.
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-08-01 4:27 ` David Miller
@ 2008-08-01 7:09 ` Peter Zijlstra
0 siblings, 0 replies; 83+ messages in thread
From: Peter Zijlstra @ 2008-08-01 7:09 UTC (permalink / raw)
To: David Miller
Cc: jarkao2, johannes, netdev, Larry.Finger, kaber, torvalds, akpm,
netdev, linux-kernel, linux-wireless, mingo
On Thu, 2008-07-31 at 21:27 -0700, David Miller wrote:
> From: David Miller <davem@davemloft.net>
> Date: Thu, 31 Jul 2008 05:29:32 -0700 (PDT)
>
> > It's late here, but I'll start testing the following patch on my
> > multiqueue capable cards after some sleep.
>
> As a quick followup, I tested this on a machine where I had
> a multiqueue interface and could reproduce the lockdep warnings,
> and the patch makes them go away.
>
> So I've pushed the patch into net-2.6 and will send it to Linus.
Thanks david!
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-08-01 7:01 ` David Miller
@ 2008-08-01 7:41 ` Jarek Poplawski
0 siblings, 0 replies; 83+ messages in thread
From: Jarek Poplawski @ 2008-08-01 7:41 UTC (permalink / raw)
To: David Miller
Cc: johannes, netdev, peterz, Larry.Finger, kaber, torvalds, akpm,
netdev, linux-kernel, linux-wireless, mingo
On Fri, Aug 01, 2008 at 12:01:46AM -0700, David Miller wrote:
> From: Jarek Poplawski <jarkao2@gmail.com>
> Date: Fri, 1 Aug 2008 07:01:50 +0000
>
> > On Fri, Aug 01, 2008 at 06:48:10AM +0000, Jarek Poplawski wrote:
> > > On Thu, Jul 31, 2008 at 05:29:32AM -0700, David Miller wrote:
> > ...
> > > > diff --git a/net/core/dev.c b/net/core/dev.c
> > > > index 63d6bcd..69320a5 100644
> > > > --- a/net/core/dev.c
> > > > +++ b/net/core/dev.c
> > > > @@ -4200,6 +4200,7 @@ static void netdev_init_queues(struct net_device *dev)
> > > > {
> > > > netdev_init_one_queue(dev, &dev->rx_queue, NULL);
> > > > netdev_for_each_tx_queue(dev, netdev_init_one_queue, NULL);
> > > > + spin_lock_init(&dev->tx_global_lock);
> > >
> > > This will probably need some lockdep annotations similar to
> > > _xmit_lock.
> >
> > ...BTW, we could probably also consider an optimization here: the
> > xmit_lock of the first queue could be treated as special, and only
> > its owner could do such a freezing. This would avoid functional
> > changes for non-multiqueue devices. On the other hand, everyone would
> > have to remember this special treatment (so, e.g., a lockdep
> > initialization separate from all the others).
>
> I think special-casing queue zero's lock is a bad idea.
> Having a real top-level synchronizer is a powerful tool and
> we could use it for other things.
Sure, if there is really no problem with lockdep here, there is no
need for this at all.
Thanks for the explanations,
Jarek P.
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-24 10:38 ` Nick Piggin
2008-07-24 10:55 ` Miklos Szeredi
2008-07-24 10:59 ` Peter Zijlstra
@ 2008-08-01 21:10 ` Paul E. McKenney
2 siblings, 0 replies; 83+ messages in thread
From: Paul E. McKenney @ 2008-08-01 21:10 UTC (permalink / raw)
To: Nick Piggin
Cc: Peter Zijlstra, David Miller, jarkao2, Larry.Finger, kaber,
torvalds, akpm, netdev, linux-kernel, linux-wireless, mingo
On Thu, Jul 24, 2008 at 08:38:35PM +1000, Nick Piggin wrote:
> On Thursday 24 July 2008 20:08, Peter Zijlstra wrote:
> > On Thu, 2008-07-24 at 02:32 -0700, David Miller wrote:
> > > From: Peter Zijlstra <peterz@infradead.org>
> > > Date: Thu, 24 Jul 2008 11:27:05 +0200
> > >
> > > > Well, not only lockdep, taking a very large number of locks is
> > > > expensive as well.
> > >
> > > Right now it would be on the order of 16 or 32 for
> > > real hardware.
> > >
> > > Much less than the scheduler currently takes on some
> > > of my systems, so currently you are the pot calling the
> > > kettle black. :-)
> >
> > One nit, and then I'll let this issue rest :-)
> >
> > The scheduler has a long lock dependancy chain (nr_cpu_ids rq locks),
> > but it never takes all of them at the same time. Any one code path will
> > at most hold two rq locks.
>
> Aside from lockdep, is there a particular problem with taking 64k locks
> at once? (in a very slow path, of course) I don't think it causes a
> problem with preempt_count; does it cause issues with the -rt kernel?
>
> Hey, something kind of cool (and OT) I've just thought of that we can
> do with ticket locks is to take tickets for 2 (or 64K) nested locks,
> and then wait for them both (all), so the cost is N*lock + longest spin,
> rather than N*lock + N*avg spin.
>
> That would mean even at the worst case of a huge amount of contention
> on all 64K locks, it should only take a couple of ms to take all of
> them (assuming max spin time isn't ridiculous).
>
> Probably not the kind of feature we want to expose widely, but for
> really special things like the scheduler, it might be a neat hack to
> save a few cycles ;) Traditional implementations would just have
> #define spin_lock_async spin_lock
> #define spin_lock_async_wait do {} while (0)
>
> Sorry it's offtopic, but if I didn't post it, I'd forget to. Might be
> a fun quick hack for someone.
FWIW, I did something similar in a previous life for the write-side of
a brlock-like locking mechanism. This was especially helpful if the
read-side critical sections were long.
Thanx, Paul
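For the curious, a sketch of how the spin_lock_async() idea Nick describes
above could look on a ticket lock (entirely hypothetical: the ticket_lock
layout and helper names are invented for illustration, and per the
follow-ups the ticket grabs themselves still need ordering):

	struct ticket_lock {
		atomic_t next;		/* next ticket to hand out */
		atomic_t owner;		/* ticket currently being served */
	};

	struct ticket_waiter {
		struct ticket_lock *lock;
		int		    ticket;
	};

	/* Take a ticket now, without spinning... */
	static void spin_lock_async(struct ticket_lock *lock,
				    struct ticket_waiter *w)
	{
		w->lock = lock;
		w->ticket = atomic_inc_return(&lock->next) - 1;
	}

	/* ...and wait for all tickets later, so N acquisitions cost
	 * N increments plus one longest spin, not N serial spins. */
	static void spin_lock_async_wait(struct ticket_waiter *w)
	{
		while (atomic_read(&w->lock->owner) != w->ticket)
			cpu_relax();
	}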
* Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()
2008-07-24 11:06 ` Nick Piggin
@ 2008-08-01 21:10 ` Paul E. McKenney
0 siblings, 0 replies; 83+ messages in thread
From: Paul E. McKenney @ 2008-08-01 21:10 UTC (permalink / raw)
To: Nick Piggin
Cc: Miklos Szeredi, peterz, davem, jarkao2, Larry.Finger, kaber,
torvalds, akpm, netdev, linux-kernel, linux-wireless, mingo
On Thu, Jul 24, 2008 at 09:06:51PM +1000, Nick Piggin wrote:
> On Thursday 24 July 2008 20:55, Miklos Szeredi wrote:
> > On Thu, 24 Jul 2008, Nick Piggin wrote:
> > > Hey, something kind of cool (and OT) I've just thought of that we can
> > > do with ticket locks is to take tickets for 2 (or 64K) nested locks,
> > > and then wait for them both (all), so the cost is N*lock + longest spin,
> > > rather than N*lock + N*avg spin.
> >
> > Isn't this deadlocky?
> >
> > E.g. one task takes ticket x=1, then other task comes in and takes x=2
> > and y=1, then first task takes y=2. Then neither can actually
> > complete both locks.
>
> Oh duh of course you still need mutual exclusion from the first lock
> to order the subsequent :P
>
> So yeah it only works for N > 2 locks, and you have to spin_lock the
> first one... so it's unsuitable for the scheduler.
Or sort the locks by address or some such.
Thanx, Paul
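The classic form of that trick, sketched for the two-lock case: order
acquisitions by lock address so every CPU takes them in the same order
and the pair cannot deadlock.

	static void double_spin_lock(spinlock_t *a, spinlock_t *b)
	{
		if (a == b) {
			spin_lock(a);
			return;
		}
		if (a > b)
			swap(a, b);	/* always lock the lower address first */
		spin_lock(a);
		spin_lock_nested(b, SINGLE_DEPTH_NESTING);
	}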