* [PATCH v2 0/3] wifi: iwlwifi: mld: fix UAF and soft lockup on firmware error
@ 2026-04-05 5:41 Cole Leavitt
From: Cole Leavitt @ 2026-04-05 5:41 UTC (permalink / raw)
To: linux-wireless; +Cc: greearb, miriam.rachel.korenblit, johannes, cole
Three fixes for the iwlmld sub-driver addressing a use-after-free, a
TSO segmentation explosion, and soft lockups during firmware error
recovery on Intel BE200 (Wi-Fi 7).
1/3 closes the NAPI race window where stale RX data from dying
firmware reaches the TX completion handlers, causing corrupt SSN
values to trigger an skb use-after-free in iwl_trans_reclaim().
Ben Greear confirmed the WARN_ONCE fires on his test systems.
2/3 fixes the TSO segmentation explosion when AMSDU is disabled
for a TID. The TLC notification sets max_tid_amsdu_len to the
sentinel value 1, which slips past the existing zero check and
produces num_subframes=0, causing skb_gso_segment() to create
32000+ tiny segments. Revised per Miriam Korenblit's feedback to
check for the sentinel value directly and add a WARN_ON_ONCE
guard after the division as defense-in-depth.
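The 2/3 failure mode is easiest to see in isolation. The sketch below is
illustrative only (the function name and parameters are not the driver's
actual identifiers); it mirrors the described logic: a plain zero check
misses the sentinel value 1, the integer division then truncates to 0,
and that zero subframe count is what drives skb_gso_segment() into tens
of thousands of single-MSS segments:

```c
#include <assert.h>

/* Hypothetical stand-in for the num_subframes computation described
 * above. max_amsdu_len == 1 is the TLC sentinel for "A-MSDU disabled
 * on this TID"; it passes a bare !max_amsdu_len check. */
static unsigned int amsdu_subframes(unsigned int max_amsdu_len,
				    unsigned int subframe_len)
{
	unsigned int n;

	/* Check the sentinel directly (covers 0 and 1), as revised
	 * per the review feedback. */
	if (max_amsdu_len <= 1)
		return 1;

	n = max_amsdu_len / subframe_len;

	/* Defense-in-depth after the division, standing in for the
	 * WARN_ON_ONCE guard: never report zero subframes. */
	return n ? n : 1;
}
```

Without the `<= 1` check, `amsdu_subframes(1, 1500)` would return
1 / 1500 == 0, which is exactly the num_subframes=0 case described
above.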
3/3 adds STATUS_FW_ERROR checks in the TX pull path to stop
feeding frames to dead firmware. Revised per Johannes Berg's
feedback to use status bit checks instead of stop_queues()/wake_queues(),
which don't interact well with TXQ-based APIs.
Changes since v1:
- 1/3: Added Tested-by from Ben Greear
- 2/3: Check max_tid_amsdu_len == 1 (sentinel) instead of
guarding !num_subframes after division; added WARN_ON_ONCE
defense-in-depth (Suggested-by: Miriam Korenblit)
- 3/3: Replaced ieee80211_stop_queues()/wake_queues() with
STATUS_FW_ERROR checks in TX pull path (per Johannes Berg)
Cole Leavitt (3):
wifi: iwlwifi: prevent NAPI processing after firmware error
wifi: iwlwifi: mld: fix TSO segmentation explosion when AMSDU is
disabled
wifi: iwlwifi: mld: skip TX when firmware is dead
.../net/wireless/intel/iwlwifi/mld/mac80211.c | 4 +
drivers/net/wireless/intel/iwlwifi/mld/tx.c | 210 +++++------
.../wireless/intel/iwlwifi/pcie/gen1_2/rx.c | 337 +++++++++---------
3 files changed, 284 insertions(+), 267 deletions(-)
base-commit: 3aae9383f42f687221c011d7ee87529398e826b3
--
2.52.0
* [PATCH 1/3] wifi: iwlwifi: prevent NAPI processing after firmware error
From: Cole Leavitt @ 2026-04-05 5:41 UTC (permalink / raw)
To: linux-wireless; +Cc: greearb, miriam.rachel.korenblit, johannes, cole, stable
After a firmware error is detected and STATUS_FW_ERROR is set, NAPI can
still be actively polling or get scheduled from a prior interrupt. The
NAPI poll functions (both legacy and MSIX variants) have no check for
STATUS_FW_ERROR and will continue processing stale RX ring entries from
dying firmware. This can dispatch TX completion notifications containing
corrupt SSN values to iwl_mld_handle_tx_resp_notif(), which passes them
to iwl_trans_reclaim(). If the corrupt SSN causes reclaim to walk TX
queue entries that were already freed by a prior correct reclaim, the
result is an skb use-after-free or double-free.
The race window opens when the MSIX IRQ handler schedules NAPI (lines
2319-2321 in rx.c) before processing the error bit (lines 2382-2396),
or when NAPI is already running on another CPU from a previous interrupt
when STATUS_FW_ERROR gets set on the current CPU.
Add STATUS_FW_ERROR checks to both NAPI poll functions to prevent
processing stale RX data after firmware error, and add early-return
guards in the TX response and compressed BA notification handlers as
defense-in-depth. Each check uses WARN_ONCE to log if the race is
actually hit, which aids diagnosis of the hard-to-reproduce skb
use-after-free reported on Intel BE200.
Note that _iwl_trans_pcie_gen2_stop_device() already calls
iwl_pcie_rx_napi_sync() to quiesce NAPI during device teardown, but that
runs much later in the restart sequence. These checks close the window
between error detection and device stop.
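Stripped of driver detail, the guard added by this patch is an early
status-bit test ahead of any RX-side work, so a poll that races with
error detection consumes nothing. The helper below is a simplified
illustration, not driver code; the bit position and names are invented
for the example:

```c
#include <assert.h>

/* Illustrative stand-in for the transport status word; bit 0 plays
 * the role of STATUS_FW_ERROR here. */
#define FW_ERROR_BIT 0UL

/* Model of a NAPI-style poll: return how much of the pending work
 * gets processed. With the firmware-error bit set, bail out before
 * touching the RX ring so stale completions never reach the TX
 * reclaim path. */
static int poll_work_done(unsigned long status, int pending)
{
	if (status & (1UL << FW_ERROR_BIT))
		return 0;
	return pending;
}
```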
Fixes: d1e879ec600f ("wifi: iwlwifi: add iwlmld sub-driver")
Cc: stable@vger.kernel.org
Tested-by: Ben Greear <greearb@candelatech.com>
Signed-off-by: Cole Leavitt <cole@unwrap.rs>
---
drivers/net/wireless/intel/iwlwifi/mld/tx.c | 202 +++++------
.../wireless/intel/iwlwifi/pcie/gen1_2/rx.c | 337 +++++++++---------
2 files changed, 273 insertions(+), 266 deletions(-)
diff --git a/drivers/net/wireless/intel/iwlwifi/mld/tx.c b/drivers/net/wireless/intel/iwlwifi/mld/tx.c
index 546d09a38dab..e341d12e5233 100644
--- a/drivers/net/wireless/intel/iwlwifi/mld/tx.c
+++ b/drivers/net/wireless/intel/iwlwifi/mld/tx.c
@@ -44,8 +44,8 @@ void iwl_mld_toggle_tx_ant(struct iwl_mld *mld, u8 *ant)
*ant = iwl_mld_next_ant(iwl_mld_get_valid_tx_ant(mld), *ant);
}
-static int
-iwl_mld_get_queue_size(struct iwl_mld *mld, struct ieee80211_txq *txq)
+static int iwl_mld_get_queue_size(struct iwl_mld *mld,
+ struct ieee80211_txq *txq)
{
struct ieee80211_sta *sta = txq->sta;
struct ieee80211_link_sta *link_sta;
@@ -74,9 +74,10 @@ static int iwl_mld_allocate_txq(struct iwl_mld *mld, struct ieee80211_txq *txq)
/* We can't know when the station is asleep or awake, so we
* must disable the queue hang detection.
*/
- unsigned int watchdog_timeout = txq->vif->type == NL80211_IFTYPE_AP ?
- IWL_WATCHDOG_DISABLED :
- mld->trans->mac_cfg->base->wd_timeout;
+ unsigned int watchdog_timeout =
+ txq->vif->type == NL80211_IFTYPE_AP ?
+ IWL_WATCHDOG_DISABLED :
+ mld->trans->mac_cfg->base->wd_timeout;
int queue, size;
lockdep_assert_wiphy(mld->wiphy);
@@ -91,9 +92,9 @@ static int iwl_mld_allocate_txq(struct iwl_mld *mld, struct ieee80211_txq *txq)
watchdog_timeout);
if (queue >= 0)
- IWL_DEBUG_TX_QUEUES(mld,
- "Enabling TXQ #%d for sta mask 0x%x tid %d\n",
- queue, fw_sta_mask, tid);
+ IWL_DEBUG_TX_QUEUES(
+ mld, "Enabling TXQ #%d for sta mask 0x%x tid %d\n",
+ queue, fw_sta_mask, tid);
return queue;
}
@@ -123,9 +124,8 @@ void iwl_mld_add_txq_list(struct iwl_mld *mld)
while (!list_empty(&mld->txqs_to_add)) {
struct ieee80211_txq *txq;
- struct iwl_mld_txq *mld_txq =
- list_first_entry(&mld->txqs_to_add, struct iwl_mld_txq,
- list);
+ struct iwl_mld_txq *mld_txq = list_first_entry(
+ &mld->txqs_to_add, struct iwl_mld_txq, list);
int failed;
txq = container_of((void *)mld_txq, struct ieee80211_txq,
@@ -149,8 +149,7 @@ void iwl_mld_add_txq_list(struct iwl_mld *mld)
void iwl_mld_add_txqs_wk(struct wiphy *wiphy, struct wiphy_work *wk)
{
- struct iwl_mld *mld = container_of(wk, struct iwl_mld,
- add_txqs_wk);
+ struct iwl_mld *mld = container_of(wk, struct iwl_mld, add_txqs_wk);
/* will reschedule to run after restart */
if (mld->fw_status.in_hw_restart)
@@ -159,8 +158,8 @@ void iwl_mld_add_txqs_wk(struct wiphy *wiphy, struct wiphy_work *wk)
iwl_mld_add_txq_list(mld);
}
-void
-iwl_mld_free_txq(struct iwl_mld *mld, u32 fw_sta_mask, u32 tid, u32 queue_id)
+void iwl_mld_free_txq(struct iwl_mld *mld, u32 fw_sta_mask, u32 tid,
+ u32 queue_id)
{
struct iwl_scd_queue_cfg_cmd remove_cmd = {
.operation = cpu_to_le32(IWL_SCD_QUEUE_REMOVE),
@@ -193,8 +192,7 @@ void iwl_mld_remove_txq(struct iwl_mld *mld, struct ieee80211_txq *txq)
sta_msk = iwl_mld_fw_sta_id_mask(mld, txq->sta);
- tid = txq->tid == IEEE80211_NUM_TIDS ? IWL_MGMT_TID :
- txq->tid;
+ tid = txq->tid == IEEE80211_NUM_TIDS ? IWL_MGMT_TID : txq->tid;
iwl_mld_free_txq(mld, sta_msk, tid, mld_txq->fw_id);
@@ -202,11 +200,9 @@ void iwl_mld_remove_txq(struct iwl_mld *mld, struct ieee80211_txq *txq)
mld_txq->status.allocated = false;
}
-#define OPT_HDR(type, skb, off) \
- (type *)(skb_network_header(skb) + (off))
+#define OPT_HDR(type, skb, off) (type *)(skb_network_header(skb) + (off))
-static __le32
-iwl_mld_get_offload_assist(struct sk_buff *skb, bool amsdu)
+static __le32 iwl_mld_get_offload_assist(struct sk_buff *skb, bool amsdu)
{
struct ieee80211_hdr *hdr = (void *)skb->data;
u16 mh_len = ieee80211_hdrlen(hdr->frame_control);
@@ -225,7 +221,7 @@ iwl_mld_get_offload_assist(struct sk_buff *skb, bool amsdu)
* the devices we support has this flags?
*/
if (WARN_ONCE(skb->protocol != htons(ETH_P_IP) &&
- skb->protocol != htons(ETH_P_IPV6),
+ skb->protocol != htons(ETH_P_IPV6),
"No support for requested checksum\n")) {
skb_checksum_help(skb);
goto out;
@@ -306,8 +302,8 @@ static void iwl_mld_get_basic_rates_and_band(struct iwl_mld *mld,
unsigned long *basic_rates,
u8 *band)
{
- u32 link_id = u32_get_bits(info->control.flags,
- IEEE80211_TX_CTRL_MLO_LINK);
+ u32 link_id =
+ u32_get_bits(info->control.flags, IEEE80211_TX_CTRL_MLO_LINK);
*basic_rates = vif->bss_conf.basic_rates;
*band = info->band;
@@ -333,8 +329,7 @@ static void iwl_mld_get_basic_rates_and_band(struct iwl_mld *mld,
}
}
-u8 iwl_mld_get_lowest_rate(struct iwl_mld *mld,
- struct ieee80211_tx_info *info,
+u8 iwl_mld_get_lowest_rate(struct iwl_mld *mld, struct ieee80211_tx_info *info,
struct ieee80211_vif *vif)
{
struct ieee80211_supported_band *sband;
@@ -389,8 +384,8 @@ static u32 iwl_mld_mac80211_rate_idx_to_fw(struct iwl_mld *mld,
/* if the rate isn't a well known legacy rate, take the lowest one */
if (rate_idx < 0 || rate_idx >= IWL_RATE_COUNT_LEGACY)
- rate_idx = iwl_mld_get_lowest_rate(mld, info,
- info->control.vif);
+ rate_idx =
+ iwl_mld_get_lowest_rate(mld, info, info->control.vif);
WARN_ON_ONCE(rate_idx < 0);
@@ -404,7 +399,8 @@ static u32 iwl_mld_mac80211_rate_idx_to_fw(struct iwl_mld *mld,
* 0 - 3 for CCK and 0 - 7 for OFDM
*/
rate_plcp = (rate_idx >= IWL_FIRST_OFDM_RATE ?
- rate_idx - IWL_FIRST_OFDM_RATE : rate_idx);
+ rate_idx - IWL_FIRST_OFDM_RATE :
+ rate_idx);
return (u32)rate_plcp | rate_flags;
}
@@ -424,8 +420,7 @@ static u32 iwl_mld_get_tx_ant(struct iwl_mld *mld,
static u32 iwl_mld_get_inject_tx_rate(struct iwl_mld *mld,
struct ieee80211_tx_info *info,
- struct ieee80211_sta *sta,
- __le16 fc)
+ struct ieee80211_sta *sta, __le16 fc)
{
struct ieee80211_tx_rate *rate = &info->control.rates[0];
u32 result;
@@ -492,9 +487,8 @@ static __le32 iwl_mld_get_tx_rate_n_flags(struct iwl_mld *mld,
return iwl_v3_rate_to_v2_v3(rate, mld->fw_rates_ver_3);
}
-static void
-iwl_mld_fill_tx_cmd_hdr(struct iwl_tx_cmd *tx_cmd,
- struct sk_buff *skb, bool amsdu)
+static void iwl_mld_fill_tx_cmd_hdr(struct iwl_tx_cmd *tx_cmd,
+ struct sk_buff *skb, bool amsdu)
{
struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
struct ieee80211_hdr *hdr = (void *)skb->data;
@@ -530,10 +524,9 @@ iwl_mld_fill_tx_cmd_hdr(struct iwl_tx_cmd *tx_cmd,
}
}
-static void
-iwl_mld_fill_tx_cmd(struct iwl_mld *mld, struct sk_buff *skb,
- struct iwl_device_tx_cmd *dev_tx_cmd,
- struct ieee80211_sta *sta)
+static void iwl_mld_fill_tx_cmd(struct iwl_mld *mld, struct sk_buff *skb,
+ struct iwl_device_tx_cmd *dev_tx_cmd,
+ struct ieee80211_sta *sta)
{
struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
struct ieee80211_hdr *hdr = (void *)skb->data;
@@ -561,8 +554,7 @@ iwl_mld_fill_tx_cmd(struct iwl_mld *mld, struct sk_buff *skb,
rate_n_flags = iwl_mld_get_tx_rate_n_flags(mld, info, sta,
hdr->frame_control);
} else if (!ieee80211_is_data(hdr->frame_control) ||
- (mld_sta &&
- mld_sta->sta_state < IEEE80211_STA_AUTHORIZED)) {
+ (mld_sta && mld_sta->sta_state < IEEE80211_STA_AUTHORIZED)) {
/* These are important frames */
flags |= IWL_TX_FLAGS_HIGH_PRI;
}
@@ -587,8 +579,8 @@ iwl_mld_get_link_from_tx_info(struct ieee80211_tx_info *info)
{
struct iwl_mld_vif *mld_vif =
iwl_mld_vif_from_mac80211(info->control.vif);
- u32 link_id = u32_get_bits(info->control.flags,
- IEEE80211_TX_CTRL_MLO_LINK);
+ u32 link_id =
+ u32_get_bits(info->control.flags, IEEE80211_TX_CTRL_MLO_LINK);
if (link_id == IEEE80211_LINK_UNSPECIFIED) {
if (info->control.vif->active_links)
@@ -600,9 +592,9 @@ iwl_mld_get_link_from_tx_info(struct ieee80211_tx_info *info)
return rcu_dereference(mld_vif->link[link_id]);
}
-static int
-iwl_mld_get_tx_queue_id(struct iwl_mld *mld, struct ieee80211_txq *txq,
- struct sk_buff *skb)
+static int iwl_mld_get_tx_queue_id(struct iwl_mld *mld,
+ struct ieee80211_txq *txq,
+ struct sk_buff *skb)
{
struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
struct ieee80211_hdr *hdr = (void *)skb->data;
@@ -686,8 +678,7 @@ iwl_mld_get_tx_queue_id(struct iwl_mld *mld, struct ieee80211_txq *txq,
return IWL_MLD_INVALID_QUEUE;
}
-static void iwl_mld_probe_resp_set_noa(struct iwl_mld *mld,
- struct sk_buff *skb)
+static void iwl_mld_probe_resp_set_noa(struct iwl_mld *mld, struct sk_buff *skb)
{
struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
struct iwl_mld_link *mld_link =
@@ -709,8 +700,7 @@ static void iwl_mld_probe_resp_set_noa(struct iwl_mld *mld,
if (skb_tailroom(skb) < resp_data->noa_len) {
if (pskb_expand_head(skb, 0, resp_data->noa_len, GFP_ATOMIC)) {
- IWL_ERR(mld,
- "Failed to reallocate probe resp\n");
+ IWL_ERR(mld, "Failed to reallocate probe resp\n");
goto out;
}
}
@@ -770,8 +760,7 @@ static int iwl_mld_tx_mpdu(struct iwl_mld *mld, struct sk_buff *skb,
tid = IWL_TID_NON_QOS;
}
- IWL_DEBUG_TX(mld, "TX TID:%d from Q:%d len %d\n",
- tid, queue, skb->len);
+ IWL_DEBUG_TX(mld, "TX TID:%d from Q:%d len %d\n", tid, queue, skb->len);
/* From now on, we cannot access info->control */
memset(&info->status, 0, sizeof(info->status));
@@ -824,7 +813,7 @@ static int iwl_mld_tx_tso_segment(struct iwl_mld *mld, struct sk_buff *skb,
*/
if (skb->protocol == htons(ETH_P_IPV6) &&
((struct ipv6hdr *)skb_network_header(skb))->nexthdr !=
- IPPROTO_TCP) {
+ IPPROTO_TCP) {
netdev_flags &= ~NETIF_F_CSUM_MASK;
return iwl_tx_tso_segment(skb, 1, netdev_flags, mpdus_skbs);
}
@@ -851,7 +840,7 @@ static int iwl_mld_tx_tso_segment(struct iwl_mld *mld, struct sk_buff *skb,
num_subframes = sta->max_amsdu_subframes;
tcp_payload_len = skb_tail_pointer(skb) - skb_transport_header(skb) -
- tcp_hdrlen(skb) + skb->data_len;
+ tcp_hdrlen(skb) + skb->data_len;
/* Make sure we have enough TBs for the A-MSDU:
* 2 for each subframe
@@ -893,7 +882,7 @@ static int iwl_mld_tx_tso(struct iwl_mld *mld, struct sk_buff *skb,
return -1;
payload_len = skb_tail_pointer(skb) - skb_transport_header(skb) -
- tcp_hdrlen(skb) + skb->data_len;
+ tcp_hdrlen(skb) + skb->data_len;
if (payload_len <= skb_shinfo(skb)->gso_size)
return iwl_mld_tx_mpdu(mld, skb, txq);
@@ -1011,8 +1000,8 @@ static void iwl_mld_hwrate_to_tx_rate(struct iwl_mld *mld,
{
enum nl80211_band band = info->band;
struct ieee80211_tx_rate *tx_rate = &info->status.rates[0];
- u32 rate_n_flags = iwl_v3_rate_from_v2_v3(rate_n_flags_fw,
- mld->fw_rates_ver_3);
+ u32 rate_n_flags =
+ iwl_v3_rate_from_v2_v3(rate_n_flags_fw, mld->fw_rates_ver_3);
u32 sgi = rate_n_flags & RATE_MCS_SGI_MSK;
u32 chan_width = rate_n_flags & RATE_MCS_CHAN_WIDTH_MSK;
u32 format = rate_n_flags & RATE_MCS_MOD_TYPE_MSK;
@@ -1042,10 +1031,9 @@ static void iwl_mld_hwrate_to_tx_rate(struct iwl_mld *mld,
tx_rate->idx = RATE_HT_MCS_INDEX(rate_n_flags);
break;
case RATE_MCS_MOD_TYPE_VHT:
- ieee80211_rate_set_vht(tx_rate,
- rate_n_flags & RATE_MCS_CODE_MSK,
- u32_get_bits(rate_n_flags,
- RATE_MCS_NSS_MSK) + 1);
+ ieee80211_rate_set_vht(
+ tx_rate, rate_n_flags & RATE_MCS_CODE_MSK,
+ u32_get_bits(rate_n_flags, RATE_MCS_NSS_MSK) + 1);
tx_rate->flags |= IEEE80211_TX_RC_VHT_MCS;
break;
case RATE_MCS_MOD_TYPE_HE:
@@ -1056,9 +1044,8 @@ static void iwl_mld_hwrate_to_tx_rate(struct iwl_mld *mld,
tx_rate->idx = 0;
break;
default:
- tx_rate->idx =
- iwl_mld_legacy_hw_idx_to_mac80211_idx(rate_n_flags,
- band);
+ tx_rate->idx = iwl_mld_legacy_hw_idx_to_mac80211_idx(
+ rate_n_flags, band);
break;
}
}
@@ -1082,6 +1069,19 @@ void iwl_mld_handle_tx_resp_notif(struct iwl_mld *mld,
bool mgmt = false;
bool tx_failure = (status & TX_STATUS_MSK) != TX_STATUS_SUCCESS;
+ /* Firmware is dead - the TX response may contain corrupt SSN values
+ * from a dying firmware DMA. Processing it could cause
+ * iwl_trans_reclaim() to free the wrong TX queue entries, leading to
+ * skb use-after-free or double-free.
+ */
+ if (unlikely(test_bit(STATUS_FW_ERROR, &mld->trans->status))) {
+ WARN_ONCE(
+ 1,
+ "iwlwifi: TX resp notif (sta=%d txq=%d) after FW error\n",
+ sta_id, txq_id);
+ return;
+ }
+
if (IWL_FW_CHECK(mld, tx_resp->frame_count != 1,
"Invalid tx_resp notif frame_count (%d)\n",
tx_resp->frame_count))
@@ -1093,8 +1093,8 @@ void iwl_mld_handle_tx_resp_notif(struct iwl_mld *mld,
notif_size, pkt_len))
return;
- ssn = le32_to_cpup((__le32 *)agg_status +
- tx_resp->frame_count) & 0xFFFF;
+ ssn = le32_to_cpup((__le32 *)agg_status + tx_resp->frame_count) &
+ 0xFFFF;
__skb_queue_head_init(&skbs);
@@ -1112,7 +1112,8 @@ void iwl_mld_handle_tx_resp_notif(struct iwl_mld *mld,
memset(&info->status, 0, sizeof(info->status));
- info->flags &= ~(IEEE80211_TX_STAT_ACK | IEEE80211_TX_STAT_TX_FILTERED);
+ info->flags &= ~(IEEE80211_TX_STAT_ACK |
+ IEEE80211_TX_STAT_TX_FILTERED);
/* inform mac80211 about what happened with the frame */
switch (status & TX_STATUS_MSK) {
@@ -1149,10 +1150,11 @@ void iwl_mld_handle_tx_resp_notif(struct iwl_mld *mld,
ieee80211_tx_status_skb(mld->hw, skb);
}
- IWL_DEBUG_TX_REPLY(mld,
- "TXQ %d status 0x%08x ssn=%d initial_rate 0x%x retries %d\n",
- txq_id, status, ssn, le32_to_cpu(tx_resp->initial_rate),
- tx_resp->failure_frame);
+ IWL_DEBUG_TX_REPLY(
+ mld,
+ "TXQ %d status 0x%08x ssn=%d initial_rate 0x%x retries %d\n",
+ txq_id, status, ssn, le32_to_cpu(tx_resp->initial_rate),
+ tx_resp->failure_frame);
if (tx_failure && mgmt)
iwl_mld_toggle_tx_ant(mld, &mld->mgmt_tx_ant);
@@ -1168,9 +1170,8 @@ void iwl_mld_handle_tx_resp_notif(struct iwl_mld *mld,
/* This can happen if the TX cmd was sent before pre_rcu_remove
* but the TX response was received after
*/
- IWL_DEBUG_TX_REPLY(mld,
- "Got valid sta_id (%d) but sta is NULL\n",
- sta_id);
+ IWL_DEBUG_TX_REPLY(
+ mld, "Got valid sta_id (%d) but sta is NULL\n", sta_id);
goto out;
}
@@ -1246,8 +1247,7 @@ int iwl_mld_flush_link_sta_txqs(struct iwl_mld *mld, u32 fw_sta_id)
resp_len = iwl_rx_packet_payload_len(cmd.resp_pkt);
if (IWL_FW_CHECK(mld, resp_len != sizeof(*rsp),
- "Invalid TXPATH_FLUSH response len: %d\n",
- resp_len)) {
+ "Invalid TXPATH_FLUSH response len: %d\n", resp_len)) {
ret = -EIO;
goto free_rsp;
}
@@ -1273,16 +1273,14 @@ int iwl_mld_flush_link_sta_txqs(struct iwl_mld *mld, u32 fw_sta_id)
int read_after = le16_to_cpu(queue_info->read_after_flush);
int txq_id = le16_to_cpu(queue_info->queue_num);
- if (IWL_FW_CHECK(mld,
- txq_id >= ARRAY_SIZE(mld->fw_id_to_txq),
+ if (IWL_FW_CHECK(mld, txq_id >= ARRAY_SIZE(mld->fw_id_to_txq),
"Invalid txq id %d\n", txq_id))
continue;
- IWL_DEBUG_TX_QUEUES(mld,
- "tid %d txq_id %d read-before %d read-after %d\n",
- le16_to_cpu(queue_info->tid), txq_id,
- le16_to_cpu(queue_info->read_before_flush),
- read_after);
+ IWL_DEBUG_TX_QUEUES(
+ mld, "tid %d txq_id %d read-before %d read-after %d\n",
+ le16_to_cpu(queue_info->tid), txq_id,
+ le16_to_cpu(queue_info->read_before_flush), read_after);
iwl_mld_tx_reclaim_txq(mld, txq_id, read_after, true);
}
@@ -1312,8 +1310,7 @@ int iwl_mld_ensure_queue(struct iwl_mld *mld, struct ieee80211_txq *txq)
return ret;
}
-int iwl_mld_update_sta_txqs(struct iwl_mld *mld,
- struct ieee80211_sta *sta,
+int iwl_mld_update_sta_txqs(struct iwl_mld *mld, struct ieee80211_sta *sta,
u32 old_sta_mask, u32 new_sta_mask)
{
struct iwl_scd_queue_cfg_cmd cmd = {
@@ -1326,10 +1323,9 @@ int iwl_mld_update_sta_txqs(struct iwl_mld *mld,
for (int tid = 0; tid <= IWL_MAX_TID_COUNT; tid++) {
struct ieee80211_txq *txq =
- sta->txq[tid != IWL_MAX_TID_COUNT ?
- tid : IEEE80211_NUM_TIDS];
- struct iwl_mld_txq *mld_txq =
- iwl_mld_txq_from_mac80211(txq);
+ sta->txq[tid != IWL_MAX_TID_COUNT ? tid :
+ IEEE80211_NUM_TIDS];
+ struct iwl_mld_txq *mld_txq = iwl_mld_txq_from_mac80211(txq);
int ret;
if (!mld_txq->status.allocated)
@@ -1340,10 +1336,9 @@ int iwl_mld_update_sta_txqs(struct iwl_mld *mld,
else
cmd.u.modify.tid = cpu_to_le32(tid);
- ret = iwl_mld_send_cmd_pdu(mld,
- WIDE_ID(DATA_PATH_GROUP,
- SCD_QUEUE_CONFIG_CMD),
- &cmd);
+ ret = iwl_mld_send_cmd_pdu(
+ mld, WIDE_ID(DATA_PATH_GROUP, SCD_QUEUE_CONFIG_CMD),
+ &cmd);
if (ret)
return ret;
}
@@ -1360,27 +1355,32 @@ void iwl_mld_handle_compressed_ba_notif(struct iwl_mld *mld,
u8 sta_id = ba_res->sta_id;
struct ieee80211_link_sta *link_sta;
+ if (unlikely(test_bit(STATUS_FW_ERROR, &mld->trans->status))) {
+ WARN_ONCE(1, "iwlwifi: BA notif (sta=%d) after FW error\n",
+ sta_id);
+ return;
+ }
+
if (!tfd_cnt)
return;
if (IWL_FW_CHECK(mld, struct_size(ba_res, tfd, tfd_cnt) > pkt_len,
- "Short BA notif (tfd_cnt=%d, size:0x%x)\n",
- tfd_cnt, pkt_len))
+ "Short BA notif (tfd_cnt=%d, size:0x%x)\n", tfd_cnt,
+ pkt_len))
return;
- IWL_DEBUG_TX_REPLY(mld,
- "BA notif received from sta_id=%d, flags=0x%x, sent:%d, acked:%d\n",
- sta_id, le32_to_cpu(ba_res->flags),
- le16_to_cpu(ba_res->txed),
- le16_to_cpu(ba_res->done));
+ IWL_DEBUG_TX_REPLY(
+ mld,
+ "BA notif received from sta_id=%d, flags=0x%x, sent:%d, acked:%d\n",
+ sta_id, le32_to_cpu(ba_res->flags), le16_to_cpu(ba_res->txed),
+ le16_to_cpu(ba_res->done));
for (int i = 0; i < tfd_cnt; i++) {
struct iwl_compressed_ba_tfd *ba_tfd = &ba_res->tfd[i];
int txq_id = le16_to_cpu(ba_tfd->q_num);
int index = le16_to_cpu(ba_tfd->tfd_index);
- if (IWL_FW_CHECK(mld,
- txq_id >= ARRAY_SIZE(mld->fw_id_to_txq),
+ if (IWL_FW_CHECK(mld, txq_id >= ARRAY_SIZE(mld->fw_id_to_txq),
"Invalid txq id %d\n", txq_id))
continue;
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/gen1_2/rx.c b/drivers/net/wireless/intel/iwlwifi/pcie/gen1_2/rx.c
index fe263cdc2e4f..554c22777ec1 100644
--- a/drivers/net/wireless/intel/iwlwifi/pcie/gen1_2/rx.c
+++ b/drivers/net/wireless/intel/iwlwifi/pcie/gen1_2/rx.c
@@ -151,8 +151,8 @@ int iwl_pcie_rx_stop(struct iwl_trans *trans)
RXF_DMA_IDLE, RXF_DMA_IDLE, 1000);
} else if (trans->mac_cfg->mq_rx_supported) {
iwl_write_prph(trans, RFH_RXF_DMA_CFG, 0);
- return iwl_poll_prph_bit(trans, RFH_GEN_STATUS,
- RXF_DMA_IDLE, RXF_DMA_IDLE, 1000);
+ return iwl_poll_prph_bit(trans, RFH_GEN_STATUS, RXF_DMA_IDLE,
+ RXF_DMA_IDLE, 1000);
} else {
iwl_write_direct32(trans, FH_MEM_RCSR_CHNL0_CONFIG_REG, 0);
return iwl_poll_direct_bit(trans, FH_MEM_RSSR_RX_STATUS_REG,
@@ -181,8 +181,10 @@ static void iwl_pcie_rxq_inc_wr_ptr(struct iwl_trans *trans,
reg = iwl_read32(trans, CSR_UCODE_DRV_GP1);
if (reg & CSR_UCODE_DRV_GP1_BIT_MAC_SLEEP) {
- IWL_DEBUG_INFO(trans, "Rx queue requesting wakeup, GP1 = 0x%x\n",
- reg);
+ IWL_DEBUG_INFO(
+ trans,
+ "Rx queue requesting wakeup, GP1 = 0x%x\n",
+ reg);
iwl_set_bit(trans, CSR_GP_CNTRL,
CSR_GP_CNTRL_REG_FLAG_MAC_ACCESS_REQ);
rxq->need_update = true;
@@ -194,8 +196,8 @@ static void iwl_pcie_rxq_inc_wr_ptr(struct iwl_trans *trans,
if (!trans->mac_cfg->mq_rx_supported)
iwl_write32(trans, FH_RSCSR_CHNL0_WPTR, rxq->write_actual);
else if (trans->mac_cfg->device_family >= IWL_DEVICE_FAMILY_BZ)
- iwl_write32(trans, HBUS_TARG_WRPTR, rxq->write_actual |
- HBUS_TARG_WRPTR_RX_Q(rxq->id));
+ iwl_write32(trans, HBUS_TARG_WRPTR,
+ rxq->write_actual | HBUS_TARG_WRPTR_RX_Q(rxq->id));
else
iwl_write32(trans, RFH_Q_FRBDCB_WIDX_TRG(rxq->id),
rxq->write_actual);
@@ -218,8 +220,7 @@ static void iwl_pcie_rxq_check_wrptr(struct iwl_trans *trans)
}
}
-static void iwl_pcie_restock_bd(struct iwl_trans *trans,
- struct iwl_rxq *rxq,
+static void iwl_pcie_restock_bd(struct iwl_trans *trans, struct iwl_rxq *rxq,
struct iwl_rx_mem_buffer *rxb)
{
if (trans->mac_cfg->device_family >= IWL_DEVICE_FAMILY_AX210) {
@@ -242,8 +243,7 @@ static void iwl_pcie_restock_bd(struct iwl_trans *trans,
/*
* iwl_pcie_rxmq_restock - restock implementation for multi-queue rx
*/
-static void iwl_pcie_rxmq_restock(struct iwl_trans *trans,
- struct iwl_rxq *rxq)
+static void iwl_pcie_rxmq_restock(struct iwl_trans *trans, struct iwl_rxq *rxq)
{
struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
struct iwl_rx_mem_buffer *rxb;
@@ -289,8 +289,7 @@ static void iwl_pcie_rxmq_restock(struct iwl_trans *trans,
/*
* iwl_pcie_rxsq_restock - restock implementation for single queue rx
*/
-static void iwl_pcie_rxsq_restock(struct iwl_trans *trans,
- struct iwl_rxq *rxq)
+static void iwl_pcie_rxsq_restock(struct iwl_trans *trans, struct iwl_rxq *rxq)
{
struct iwl_rx_mem_buffer *rxb;
@@ -346,8 +345,7 @@ static void iwl_pcie_rxsq_restock(struct iwl_trans *trans,
* also updates the memory address in the firmware to reference the new
* target buffer.
*/
-static
-void iwl_pcie_rxq_restock(struct iwl_trans *trans, struct iwl_rxq *rxq)
+static void iwl_pcie_rxq_restock(struct iwl_trans *trans, struct iwl_rxq *rxq)
{
if (trans->mac_cfg->mq_rx_supported)
iwl_pcie_rxmq_restock(trans, rxq);
@@ -359,8 +357,8 @@ void iwl_pcie_rxq_restock(struct iwl_trans *trans, struct iwl_rxq *rxq)
* iwl_pcie_rx_alloc_page - allocates and returns a page.
*
*/
-static struct page *iwl_pcie_rx_alloc_page(struct iwl_trans *trans,
- u32 *offset, gfp_t priority)
+static struct page *iwl_pcie_rx_alloc_page(struct iwl_trans *trans, u32 *offset,
+ gfp_t priority)
{
struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
unsigned int allocsize = PAGE_SIZE << trans_pcie->rx_page_order;
@@ -399,8 +397,7 @@ static struct page *iwl_pcie_rx_alloc_page(struct iwl_trans *trans,
* buffers.
*/
if (!(gfp_mask & __GFP_NOWARN) && net_ratelimit())
- IWL_CRIT(trans,
- "Failed to alloc_pages\n");
+ IWL_CRIT(trans, "Failed to alloc_pages\n");
return NULL;
}
@@ -464,10 +461,9 @@ void iwl_pcie_rxq_alloc_rbs(struct iwl_trans *trans, gfp_t priority,
rxb->page = page;
rxb->offset = offset;
/* Get physical address of the RB */
- rxb->page_dma =
- dma_map_page(trans->dev, page, rxb->offset,
- trans_pcie->rx_buf_bytes,
- DMA_FROM_DEVICE);
+ rxb->page_dma = dma_map_page(trans->dev, page, rxb->offset,
+ trans_pcie->rx_buf_bytes,
+ DMA_FROM_DEVICE);
if (dma_mapping_error(trans->dev, rxb->page_dma)) {
rxb->page = NULL;
spin_lock_bh(&rxq->lock);
@@ -579,9 +575,10 @@ static void iwl_pcie_rx_allocator(struct iwl_trans *trans)
if (!pending) {
pending = atomic_read(&rba->req_pending);
if (pending)
- IWL_DEBUG_TPT(trans,
- "Got more pending allocation requests = %d\n",
- pending);
+ IWL_DEBUG_TPT(
+ trans,
+ "Got more pending allocation requests = %d\n",
+ pending);
}
spin_lock_bh(&rba->lock);
@@ -592,7 +589,6 @@ static void iwl_pcie_rx_allocator(struct iwl_trans *trans)
spin_unlock_bh(&rba->lock);
atomic_inc(&rba->req_ready);
-
}
spin_lock_bh(&rba->lock);
@@ -634,9 +630,8 @@ static void iwl_pcie_rx_allocator_get(struct iwl_trans *trans,
spin_lock(&rba->lock);
for (i = 0; i < RX_CLAIM_REQ_ALLOC; i++) {
/* Get next free Rx buffer, remove it from free list */
- struct iwl_rx_mem_buffer *rxb =
- list_first_entry(&rba->rbd_allocated,
- struct iwl_rx_mem_buffer, list);
+ struct iwl_rx_mem_buffer *rxb = list_first_entry(
+ &rba->rbd_allocated, struct iwl_rx_mem_buffer, list);
list_move(&rxb->list, &rxq->rx_free);
}
@@ -661,8 +656,8 @@ static int iwl_pcie_free_bd_size(struct iwl_trans *trans)
if (trans->mac_cfg->device_family >= IWL_DEVICE_FAMILY_AX210)
return sizeof(struct iwl_rx_transfer_desc);
- return trans->mac_cfg->mq_rx_supported ?
- sizeof(__le64) : sizeof(__le32);
+ return trans->mac_cfg->mq_rx_supported ? sizeof(__le64) :
+ sizeof(__le32);
}
static int iwl_pcie_used_bd_size(struct iwl_trans *trans)
@@ -676,14 +671,12 @@ static int iwl_pcie_used_bd_size(struct iwl_trans *trans)
return sizeof(__le32);
}
-static void iwl_pcie_free_rxq_dma(struct iwl_trans *trans,
- struct iwl_rxq *rxq)
+static void iwl_pcie_free_rxq_dma(struct iwl_trans *trans, struct iwl_rxq *rxq)
{
int free_size = iwl_pcie_free_bd_size(trans);
if (rxq->bd)
- dma_free_coherent(trans->dev,
- free_size * rxq->queue_size,
+ dma_free_coherent(trans->dev, free_size * rxq->queue_size,
rxq->bd, rxq->bd_dma);
rxq->bd_dma = 0;
rxq->bd = NULL;
@@ -694,7 +687,7 @@ static void iwl_pcie_free_rxq_dma(struct iwl_trans *trans,
if (rxq->used_bd)
dma_free_coherent(trans->dev,
iwl_pcie_used_bd_size(trans) *
- rxq->queue_size,
+ rxq->queue_size,
rxq->used_bd, rxq->used_bd_dma);
rxq->used_bd_dma = 0;
rxq->used_bd = NULL;
@@ -702,8 +695,8 @@ static void iwl_pcie_free_rxq_dma(struct iwl_trans *trans,
static size_t iwl_pcie_rb_stts_size(struct iwl_trans *trans)
{
- bool use_rx_td = (trans->mac_cfg->device_family >=
- IWL_DEVICE_FAMILY_AX210);
+ bool use_rx_td =
+ (trans->mac_cfg->device_family >= IWL_DEVICE_FAMILY_AX210);
if (use_rx_td)
return sizeof(__le16);
@@ -711,8 +704,7 @@ static size_t iwl_pcie_rb_stts_size(struct iwl_trans *trans)
return sizeof(struct iwl_rb_status);
}
-static int iwl_pcie_alloc_rxq_dma(struct iwl_trans *trans,
- struct iwl_rxq *rxq)
+static int iwl_pcie_alloc_rxq_dma(struct iwl_trans *trans, struct iwl_rxq *rxq)
{
struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
size_t rb_stts_size = iwl_pcie_rb_stts_size(trans);
@@ -738,11 +730,9 @@ static int iwl_pcie_alloc_rxq_dma(struct iwl_trans *trans,
goto err;
if (trans->mac_cfg->mq_rx_supported) {
- rxq->used_bd = dma_alloc_coherent(dev,
- iwl_pcie_used_bd_size(trans) *
- rxq->queue_size,
- &rxq->used_bd_dma,
- GFP_KERNEL);
+ rxq->used_bd = dma_alloc_coherent(
+ dev, iwl_pcie_used_bd_size(trans) * rxq->queue_size,
+ &rxq->used_bd_dma, GFP_KERNEL);
if (!rxq->used_bd)
goto err;
}
@@ -774,8 +764,8 @@ static int iwl_pcie_rx_alloc(struct iwl_trans *trans)
return -EINVAL;
trans_pcie->rxq = kzalloc_objs(struct iwl_rxq, trans->info.num_rxqs);
- trans_pcie->rx_pool = kzalloc_objs(trans_pcie->rx_pool[0],
- RX_POOL_SIZE(trans_pcie->num_rx_bufs));
+ trans_pcie->rx_pool = kzalloc_objs(
+ trans_pcie->rx_pool[0], RX_POOL_SIZE(trans_pcie->num_rx_bufs));
trans_pcie->global_table =
kzalloc_objs(trans_pcie->global_table[0],
RX_POOL_SIZE(trans_pcie->num_rx_bufs));
@@ -791,11 +781,9 @@ static int iwl_pcie_rx_alloc(struct iwl_trans *trans)
* Allocate the driver's pointer to receive buffer status.
* Allocate for all queues continuously (HW requirement).
*/
- trans_pcie->base_rb_stts =
- dma_alloc_coherent(trans->dev,
- rb_stts_size * trans->info.num_rxqs,
- &trans_pcie->base_rb_stts_dma,
- GFP_KERNEL);
+ trans_pcie->base_rb_stts = dma_alloc_coherent(
+ trans->dev, rb_stts_size * trans->info.num_rxqs,
+ &trans_pcie->base_rb_stts_dma, GFP_KERNEL);
if (!trans_pcie->base_rb_stts) {
ret = -ENOMEM;
goto err;
@@ -868,8 +856,7 @@ static void iwl_pcie_rx_hw_init(struct iwl_trans *trans, struct iwl_rxq *rxq)
(u32)(rxq->bd_dma >> 8));
/* Tell device where in DRAM to update its Rx status */
- iwl_write32(trans, FH_RSCSR_CHNL0_STTS_WPTR_REG,
- rxq->rb_stts_dma >> 4);
+ iwl_write32(trans, FH_RSCSR_CHNL0_STTS_WPTR_REG, rxq->rb_stts_dma >> 4);
/* Enable Rx DMA
* FH_RCSR_CHNL0_RX_IGNORE_RXF_EMPTY is set because of HW bug in
@@ -881,11 +868,12 @@ static void iwl_pcie_rx_hw_init(struct iwl_trans *trans, struct iwl_rxq *rxq)
*/
iwl_write32(trans, FH_MEM_RCSR_CHNL0_CONFIG_REG,
FH_RCSR_RX_CONFIG_CHNL_EN_ENABLE_VAL |
- FH_RCSR_CHNL0_RX_IGNORE_RXF_EMPTY |
- FH_RCSR_CHNL0_RX_CONFIG_IRQ_DEST_INT_HOST_VAL |
- rb_size |
- (RX_RB_TIMEOUT << FH_RCSR_RX_CONFIG_REG_IRQ_RBTH_POS) |
- (rfdnlog << FH_RCSR_RX_CONFIG_RBDCB_SIZE_POS));
+ FH_RCSR_CHNL0_RX_IGNORE_RXF_EMPTY |
+ FH_RCSR_CHNL0_RX_CONFIG_IRQ_DEST_INT_HOST_VAL |
+ rb_size |
+ (RX_RB_TIMEOUT
+ << FH_RCSR_RX_CONFIG_REG_IRQ_RBTH_POS) |
+ (rfdnlog << FH_RCSR_RX_CONFIG_RBDCB_SIZE_POS));
iwl_trans_release_nic_access(trans);
@@ -931,16 +919,13 @@ static void iwl_pcie_rx_mq_hw_init(struct iwl_trans *trans)
for (i = 0; i < trans->info.num_rxqs; i++) {
/* Tell device where to find RBD free table in DRAM */
- iwl_write_prph64_no_grab(trans,
- RFH_Q_FRBDCB_BA_LSB(i),
+ iwl_write_prph64_no_grab(trans, RFH_Q_FRBDCB_BA_LSB(i),
trans_pcie->rxq[i].bd_dma);
/* Tell device where to find RBD used table in DRAM */
- iwl_write_prph64_no_grab(trans,
- RFH_Q_URBDCB_BA_LSB(i),
+ iwl_write_prph64_no_grab(trans, RFH_Q_URBDCB_BA_LSB(i),
trans_pcie->rxq[i].used_bd_dma);
/* Tell device where in DRAM to update its Rx status */
- iwl_write_prph64_no_grab(trans,
- RFH_Q_URBD_STTS_WPTR_LSB(i),
+ iwl_write_prph64_no_grab(trans, RFH_Q_URBD_STTS_WPTR_LSB(i),
trans_pcie->rxq[i].rb_stts_dma);
/* Reset device indice tables */
iwl_write_prph_no_grab(trans, RFH_Q_FRBDCB_WIDX(i), 0);
@@ -959,23 +944,24 @@ static void iwl_pcie_rx_mq_hw_init(struct iwl_trans *trans)
*/
iwl_write_prph_no_grab(trans, RFH_RXF_DMA_CFG,
RFH_DMA_EN_ENABLE_VAL | rb_size |
- RFH_RXF_DMA_MIN_RB_4_8 |
- RFH_RXF_DMA_DROP_TOO_LARGE_MASK |
- RFH_RXF_DMA_RBDCB_SIZE_512);
+ RFH_RXF_DMA_MIN_RB_4_8 |
+ RFH_RXF_DMA_DROP_TOO_LARGE_MASK |
+ RFH_RXF_DMA_RBDCB_SIZE_512);
/*
* Activate DMA snooping.
* Set RX DMA chunk size to 64B for IOSF and 128B for PCIe
* Default queue is 0
*/
- iwl_write_prph_no_grab(trans, RFH_GEN_CFG,
- RFH_GEN_CFG_RFH_DMA_SNOOP |
- RFH_GEN_CFG_VAL(DEFAULT_RXQ_NUM, 0) |
- RFH_GEN_CFG_SERVICE_DMA_SNOOP |
- RFH_GEN_CFG_VAL(RB_CHUNK_SIZE,
- trans->mac_cfg->integrated ?
- RFH_GEN_CFG_RB_CHUNK_SIZE_64 :
- RFH_GEN_CFG_RB_CHUNK_SIZE_128));
+ iwl_write_prph_no_grab(
+ trans, RFH_GEN_CFG,
+ RFH_GEN_CFG_RFH_DMA_SNOOP |
+ RFH_GEN_CFG_VAL(DEFAULT_RXQ_NUM, 0) |
+ RFH_GEN_CFG_SERVICE_DMA_SNOOP |
+ RFH_GEN_CFG_VAL(RB_CHUNK_SIZE,
+ trans->mac_cfg->integrated ?
+ RFH_GEN_CFG_RB_CHUNK_SIZE_64 :
+ RFH_GEN_CFG_RB_CHUNK_SIZE_128));
/* Enable the relevant rx queues */
iwl_write_prph_no_grab(trans, RFH_RXF_RXQ_ACTIVE, enabled);
@@ -997,7 +983,8 @@ void iwl_pcie_rx_init_rxb_lists(struct iwl_rxq *rxq)
static int iwl_pcie_rx_handle(struct iwl_trans *trans, int queue, int budget);
-static inline struct iwl_trans_pcie *iwl_netdev_to_trans_pcie(struct net_device *dev)
+static inline struct iwl_trans_pcie *
+iwl_netdev_to_trans_pcie(struct net_device *dev)
{
return *(struct iwl_trans_pcie **)netdev_priv(dev);
}
@@ -1012,10 +999,21 @@ static int iwl_pcie_napi_poll(struct napi_struct *napi, int budget)
trans_pcie = iwl_netdev_to_trans_pcie(napi->dev);
trans = trans_pcie->trans;
+ /* Stop processing RX if firmware has crashed. Stale notifications
+ * from dying firmware (e.g. TX completions with corrupt SSN values)
+ * can cause use-after-free in reclaim paths.
+ */
+ if (unlikely(test_bit(STATUS_FW_ERROR, &trans->status))) {
+ WARN_ONCE(1, "iwlwifi: NAPI poll[%d] invoked after FW error\n",
+ rxq->id);
+ napi_complete_done(napi, 0);
+ return 0;
+ }
+
ret = iwl_pcie_rx_handle(trans, rxq->id, budget);
- IWL_DEBUG_ISR(trans, "[%d] handled %d, budget %d\n",
- rxq->id, ret, budget);
+ IWL_DEBUG_ISR(trans, "[%d] handled %d, budget %d\n", rxq->id, ret,
+ budget);
if (ret < budget) {
spin_lock(&trans_pcie->irq_lock);
@@ -1039,6 +1037,15 @@ static int iwl_pcie_napi_poll_msix(struct napi_struct *napi, int budget)
trans_pcie = iwl_netdev_to_trans_pcie(napi->dev);
trans = trans_pcie->trans;
+ if (unlikely(test_bit(STATUS_FW_ERROR, &trans->status))) {
+ WARN_ONCE(
+ 1,
+ "iwlwifi: NAPI MSIX poll[%d] invoked after FW error\n",
+ rxq->id);
+ napi_complete_done(napi, 0);
+ return 0;
+ }
+
ret = iwl_pcie_rx_handle(trans, rxq->id, budget);
IWL_DEBUG_ISR(trans, "[%d] handled %d, budget %d\n", rxq->id, ret,
budget);
@@ -1121,30 +1128,31 @@ static int _iwl_pcie_rx_init(struct iwl_trans *trans)
memset(rxq->rb_stts, 0,
(trans->mac_cfg->device_family >=
IWL_DEVICE_FAMILY_AX210) ?
- sizeof(__le16) : sizeof(struct iwl_rb_status));
+ sizeof(__le16) :
+ sizeof(struct iwl_rb_status));
iwl_pcie_rx_init_rxb_lists(rxq);
spin_unlock_bh(&rxq->lock);
if (!rxq->napi.poll) {
- int (*poll)(struct napi_struct *, int) = iwl_pcie_napi_poll;
+ int (*poll)(struct napi_struct *, int) =
+ iwl_pcie_napi_poll;
if (trans_pcie->msix_enabled)
poll = iwl_pcie_napi_poll_msix;
- netif_napi_add(trans_pcie->napi_dev, &rxq->napi,
- poll);
+ netif_napi_add(trans_pcie->napi_dev, &rxq->napi, poll);
napi_enable(&rxq->napi);
}
-
}
/* move the pool to the default queue and allocator ownerships */
queue_size = trans->mac_cfg->mq_rx_supported ?
- trans_pcie->num_rx_bufs - 1 : RX_QUEUE_SIZE;
- allocator_pool_size = trans->info.num_rxqs *
- (RX_CLAIM_REQ_ALLOC - RX_POST_REQ_ALLOC);
+ trans_pcie->num_rx_bufs - 1 :
+ RX_QUEUE_SIZE;
+ allocator_pool_size =
+ trans->info.num_rxqs * (RX_CLAIM_REQ_ALLOC - RX_POST_REQ_ALLOC);
num_alloc = queue_size + allocator_pool_size;
for (i = 0; i < num_alloc; i++) {
@@ -1291,11 +1299,9 @@ static void iwl_pcie_rx_reuse_rbd(struct iwl_trans *trans,
}
}
-static void iwl_pcie_rx_handle_rb(struct iwl_trans *trans,
- struct iwl_rxq *rxq,
- struct iwl_rx_mem_buffer *rxb,
- bool emergency,
- int i)
+static void iwl_pcie_rx_handle_rb(struct iwl_trans *trans, struct iwl_rxq *rxq,
+ struct iwl_rx_mem_buffer *rxb, bool emergency,
+ int i)
{
struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
struct iwl_txq *txq = trans_pcie->txqs.txq[trans->conf.cmd_queue];
@@ -1330,19 +1336,21 @@ static void iwl_pcie_rx_handle_rb(struct iwl_trans *trans,
}
WARN((le32_to_cpu(pkt->len_n_flags) & FH_RSCSR_RXQ_MASK) >>
- FH_RSCSR_RXQ_POS != rxq->id,
+ FH_RSCSR_RXQ_POS !=
+ rxq->id,
"frame on invalid queue - is on %d and indicates %d\n",
rxq->id,
(le32_to_cpu(pkt->len_n_flags) & FH_RSCSR_RXQ_MASK) >>
- FH_RSCSR_RXQ_POS);
+ FH_RSCSR_RXQ_POS);
- IWL_DEBUG_RX(trans,
- "Q %d: cmd at offset %d: %s (%.2x.%2x, seq 0x%x)\n",
- rxq->id, offset,
- iwl_get_cmd_string(trans,
- WIDE_ID(pkt->hdr.group_id, pkt->hdr.cmd)),
- pkt->hdr.group_id, pkt->hdr.cmd,
- le16_to_cpu(pkt->hdr.sequence));
+ IWL_DEBUG_RX(
+ trans,
+ "Q %d: cmd at offset %d: %s (%.2x.%2x, seq 0x%x)\n",
+ rxq->id, offset,
+ iwl_get_cmd_string(trans, WIDE_ID(pkt->hdr.group_id,
+ pkt->hdr.cmd)),
+ pkt->hdr.group_id, pkt->hdr.cmd,
+ le16_to_cpu(pkt->hdr.sequence));
len = iwl_rx_packet_len(pkt);
len += sizeof(u32); /* account for status word */
@@ -1367,7 +1375,7 @@ static void iwl_pcie_rx_handle_rb(struct iwl_trans *trans,
for (i = 0; i < trans->conf.n_no_reclaim_cmds; i++) {
if (trans->conf.no_reclaim_cmds[i] ==
- pkt->hdr.cmd) {
+ pkt->hdr.cmd) {
reclaim = false;
break;
}
@@ -1375,11 +1383,10 @@ static void iwl_pcie_rx_handle_rb(struct iwl_trans *trans,
}
if (rxq->id == IWL_DEFAULT_RX_QUEUE)
- iwl_op_mode_rx(trans->op_mode, &rxq->napi,
- &rxcb);
+ iwl_op_mode_rx(trans->op_mode, &rxq->napi, &rxcb);
else
- iwl_op_mode_rx_rss(trans->op_mode, &rxq->napi,
- &rxcb, rxq->id);
+ iwl_op_mode_rx_rss(trans->op_mode, &rxq->napi, &rxcb,
+ rxq->id);
/*
* After here, we should always check rxcb._page_stolen,
@@ -1419,10 +1426,9 @@ static void iwl_pcie_rx_handle_rb(struct iwl_trans *trans,
* SKBs that fail to Rx correctly, add them back into the
* rx_free list for reuse later. */
if (rxb->page != NULL) {
- rxb->page_dma =
- dma_map_page(trans->dev, rxb->page, rxb->offset,
- trans_pcie->rx_buf_bytes,
- DMA_FROM_DEVICE);
+ rxb->page_dma = dma_map_page(trans->dev, rxb->page, rxb->offset,
+ trans_pcie->rx_buf_bytes,
+ DMA_FROM_DEVICE);
if (dma_mapping_error(trans->dev, rxb->page_dma)) {
/*
* free the page(s) as well to not break
@@ -1534,9 +1540,10 @@ static int iwl_pcie_rx_handle(struct iwl_trans *trans, int queue, int budget)
!emergency)) {
iwl_pcie_rx_move_to_allocator(rxq, rba);
emergency = true;
- IWL_DEBUG_TPT(trans,
- "RX path is in emergency. Pending allocations %d\n",
- rb_pending_alloc);
+ IWL_DEBUG_TPT(
+ trans,
+ "RX path is in emergency. Pending allocations %d\n",
+ rb_pending_alloc);
}
IWL_DEBUG_RX(trans, "Q %d: HW = %d, SW = %d\n", rxq->id, r, i);
@@ -1585,9 +1592,10 @@ static int iwl_pcie_rx_handle(struct iwl_trans *trans, int queue, int budget)
if (count == 8) {
count = 0;
if (rb_pending_alloc < rxq->queue_size / 3) {
- IWL_DEBUG_TPT(trans,
- "RX path exited emergency. Pending allocations %d\n",
- rb_pending_alloc);
+ IWL_DEBUG_TPT(
+ trans,
+ "RX path exited emergency. Pending allocations %d\n",
+ rb_pending_alloc);
emergency = false;
}
@@ -1682,9 +1690,9 @@ static void iwl_pcie_irq_handle_error(struct iwl_trans *trans)
if (trans->cfg->internal_wimax_coex &&
!trans->mac_cfg->base->apmg_not_supported &&
(!(iwl_read_prph(trans, APMG_CLK_CTRL_REG) &
- APMS_CLK_VAL_MRB_FUNC_MODE) ||
+ APMS_CLK_VAL_MRB_FUNC_MODE) ||
(iwl_read_prph(trans, APMG_PS_CTRL_REG) &
- APMG_PS_CTRL_VAL_RESET_REQ))) {
+ APMG_PS_CTRL_VAL_RESET_REQ))) {
clear_bit(STATUS_SYNC_HCMD_ACTIVE, &trans->status);
iwl_op_mode_wimax_active(trans->op_mode);
wake_up(&trans_pcie->wait_command_queue);
@@ -1730,9 +1738,9 @@ static u32 iwl_pcie_int_cause_non_ict(struct iwl_trans *trans)
}
/* a device (PCI-E) page is 4096 bytes long */
-#define ICT_SHIFT 12
-#define ICT_SIZE (1 << ICT_SHIFT)
-#define ICT_COUNT (ICT_SIZE / sizeof(u32))
+#define ICT_SHIFT 12
+#define ICT_SIZE (1 << ICT_SHIFT)
+#define ICT_COUNT (ICT_SIZE / sizeof(u32))
/* interrupt handler using ict table, with this interrupt driver will
* stop using INTA register to get device's interrupt, reading this register
@@ -1766,7 +1774,7 @@ static u32 iwl_pcie_int_cause_ict(struct iwl_trans *trans)
do {
val |= read;
IWL_DEBUG_ISR(trans, "ICT index %d value 0x%08X\n",
- trans_pcie->ict_index, read);
+ trans_pcie->ict_index, read);
trans_pcie->ict_tbl[trans_pcie->ict_index] = 0;
trans_pcie->ict_index =
((trans_pcie->ict_index + 1) & (ICT_COUNT - 1));
@@ -1822,8 +1830,7 @@ void iwl_pcie_handle_rfkill_irq(struct iwl_trans *trans, bool from_irq)
mutex_unlock(&trans_pcie->mutex);
if (hw_rfkill) {
- if (test_and_clear_bit(STATUS_SYNC_HCMD_ACTIVE,
- &trans->status))
+ if (test_and_clear_bit(STATUS_SYNC_HCMD_ACTIVE, &trans->status))
IWL_DEBUG_RF_KILL(trans,
"Rfkill while SYNC HCMD in flight\n");
wake_up(&trans_pcie->wait_command_queue);
@@ -1866,9 +1873,8 @@ static void iwl_trans_pcie_handle_reset_interrupt(struct iwl_trans *trans)
}
fallthrough;
case CSR_IPC_STATE_RESET_NONE:
- IWL_FW_CHECK_FAILED(trans,
- "Invalid reset interrupt (state=%d)!\n",
- state);
+ IWL_FW_CHECK_FAILED(
+ trans, "Invalid reset interrupt (state=%d)!\n", state);
break;
case CSR_IPC_STATE_RESET_TOP_FOLLOWER:
if (trans_pcie->fw_reset_state == FW_RESET_REQUESTED) {
@@ -1909,11 +1915,12 @@ irqreturn_t iwl_pcie_irq_handler(int irq, void *dev_id)
inta = iwl_pcie_int_cause_non_ict(trans);
if (iwl_have_debug_level(IWL_DL_ISR)) {
- IWL_DEBUG_ISR(trans,
- "ISR inta 0x%08x, enabled 0x%08x(sw), enabled(hw) 0x%08x, fh 0x%08x\n",
- inta, trans_pcie->inta_mask,
- iwl_read32(trans, CSR_INT_MASK),
- iwl_read32(trans, CSR_FH_INT_STATUS));
+ IWL_DEBUG_ISR(
+ trans,
+ "ISR inta 0x%08x, enabled 0x%08x(sw), enabled(hw) 0x%08x, fh 0x%08x\n",
+ inta, trans_pcie->inta_mask,
+ iwl_read32(trans, CSR_INT_MASK),
+ iwl_read32(trans, CSR_FH_INT_STATUS));
if (inta & (~trans_pcie->inta_mask))
IWL_DEBUG_ISR(trans,
"We got a masked interrupt (0x%08x)\n",
@@ -1964,8 +1971,8 @@ irqreturn_t iwl_pcie_irq_handler(int irq, void *dev_id)
iwl_write32(trans, CSR_INT, inta | ~trans_pcie->inta_mask);
if (iwl_have_debug_level(IWL_DL_ISR))
- IWL_DEBUG_ISR(trans, "inta 0x%08x, enabled 0x%08x\n",
- inta, iwl_read32(trans, CSR_INT_MASK));
+ IWL_DEBUG_ISR(trans, "inta 0x%08x, enabled 0x%08x\n", inta,
+ iwl_read32(trans, CSR_INT_MASK));
spin_unlock_bh(&trans_pcie->irq_lock);
@@ -1986,8 +1993,9 @@ irqreturn_t iwl_pcie_irq_handler(int irq, void *dev_id)
/* NIC fires this, but we don't use it, redundant with WAKEUP */
if (inta & CSR_INT_BIT_SCD) {
- IWL_DEBUG_ISR(trans,
- "Scheduler finished to transmit the frame/frames.\n");
+ IWL_DEBUG_ISR(
+ trans,
+ "Scheduler finished to transmit the frame/frames.\n");
isr_stats->sch++;
}
@@ -2029,8 +2037,10 @@ irqreturn_t iwl_pcie_irq_handler(int irq, void *dev_id)
/* Error detected by uCode */
if (inta & CSR_INT_BIT_SW_ERR) {
- IWL_ERR(trans, "Microcode SW error detected. "
- " Restarting 0x%X.\n", inta);
+ IWL_ERR(trans,
+ "Microcode SW error detected. "
+ " Restarting 0x%X.\n",
+ inta);
isr_stats->sw++;
if (trans_pcie->fw_reset_state == FW_RESET_REQUESTED) {
trans_pcie->fw_reset_state = FW_RESET_ERROR;
@@ -2055,18 +2065,17 @@ irqreturn_t iwl_pcie_irq_handler(int irq, void *dev_id)
/* All uCode command responses, including Tx command responses,
* Rx "responses" (frame-received notification), and other
* notifications from uCode come through here*/
- if (inta & (CSR_INT_BIT_FH_RX | CSR_INT_BIT_SW_RX |
- CSR_INT_BIT_RX_PERIODIC)) {
+ if (inta &
+ (CSR_INT_BIT_FH_RX | CSR_INT_BIT_SW_RX | CSR_INT_BIT_RX_PERIODIC)) {
IWL_DEBUG_ISR(trans, "Rx interrupt\n");
if (inta & (CSR_INT_BIT_FH_RX | CSR_INT_BIT_SW_RX)) {
handled |= (CSR_INT_BIT_FH_RX | CSR_INT_BIT_SW_RX);
iwl_write32(trans, CSR_FH_INT_STATUS,
- CSR_FH_INT_RX_MASK);
+ CSR_FH_INT_RX_MASK);
}
if (inta & CSR_INT_BIT_RX_PERIODIC) {
handled |= CSR_INT_BIT_RX_PERIODIC;
- iwl_write32(trans,
- CSR_INT, CSR_INT_BIT_RX_PERIODIC);
+ iwl_write32(trans, CSR_INT, CSR_INT_BIT_RX_PERIODIC);
}
/* Sending RX interrupt require many steps to be done in the
* device:
@@ -2080,8 +2089,7 @@ irqreturn_t iwl_pcie_irq_handler(int irq, void *dev_id)
*/
/* Disable periodic interrupt; we use it as just a one-shot. */
- iwl_write8(trans, CSR_INT_PERIODIC_REG,
- CSR_INT_PERIODIC_DIS);
+ iwl_write8(trans, CSR_INT_PERIODIC_REG, CSR_INT_PERIODIC_DIS);
/*
* Enable periodic interrupt in 8 msec only if we received
@@ -2164,8 +2172,7 @@ void iwl_pcie_free_ict(struct iwl_trans *trans)
struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
if (trans_pcie->ict_tbl) {
- dma_free_coherent(trans->dev, ICT_SIZE,
- trans_pcie->ict_tbl,
+ dma_free_coherent(trans->dev, ICT_SIZE, trans_pcie->ict_tbl,
trans_pcie->ict_tbl_dma);
trans_pcie->ict_tbl = NULL;
trans_pcie->ict_tbl_dma = 0;
@@ -2181,9 +2188,8 @@ int iwl_pcie_alloc_ict(struct iwl_trans *trans)
{
struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
- trans_pcie->ict_tbl =
- dma_alloc_coherent(trans->dev, ICT_SIZE,
- &trans_pcie->ict_tbl_dma, GFP_KERNEL);
+ trans_pcie->ict_tbl = dma_alloc_coherent(
+ trans->dev, ICT_SIZE, &trans_pcie->ict_tbl_dma, GFP_KERNEL);
if (!trans_pcie->ict_tbl)
return -ENOMEM;
@@ -2214,8 +2220,7 @@ void iwl_pcie_reset_ict(struct iwl_trans *trans)
val = trans_pcie->ict_tbl_dma >> ICT_SHIFT;
- val |= CSR_DRAM_INT_TBL_ENABLE |
- CSR_DRAM_INIT_TBL_WRAP_CHECK |
+ val |= CSR_DRAM_INT_TBL_ENABLE | CSR_DRAM_INIT_TBL_WRAP_CHECK |
CSR_DRAM_INIT_TBL_WRITE_POINTER;
IWL_DEBUG_ISR(trans, "CSR_DRAM_INT_TBL_REG =0x%x\n", val);
@@ -2298,10 +2303,11 @@ irqreturn_t iwl_pcie_irq_msix_handler(int irq, void *dev_id)
}
if (iwl_have_debug_level(IWL_DL_ISR)) {
- IWL_DEBUG_ISR(trans,
- "ISR[%d] inta_fh 0x%08x, enabled (sw) 0x%08x (hw) 0x%08x\n",
- entry->entry, inta_fh, trans_pcie->fh_mask,
- iwl_read32(trans, CSR_MSIX_FH_INT_MASK_AD));
+ IWL_DEBUG_ISR(
+ trans,
+ "ISR[%d] inta_fh 0x%08x, enabled (sw) 0x%08x (hw) 0x%08x\n",
+ entry->entry, inta_fh, trans_pcie->fh_mask,
+ iwl_read32(trans, CSR_MSIX_FH_INT_MASK_AD));
if (inta_fh & ~trans_pcie->fh_mask)
IWL_DEBUG_ISR(trans,
"We got a masked interrupt (0x%08x)\n",
@@ -2400,10 +2406,11 @@ irqreturn_t iwl_pcie_irq_msix_handler(int irq, void *dev_id)
/* After checking FH register check HW register */
if (iwl_have_debug_level(IWL_DL_ISR)) {
- IWL_DEBUG_ISR(trans,
- "ISR[%d] inta_hw 0x%08x, enabled (sw) 0x%08x (hw) 0x%08x\n",
- entry->entry, inta_hw, trans_pcie->hw_mask,
- iwl_read32(trans, CSR_MSIX_HW_INT_MASK_AD));
+ IWL_DEBUG_ISR(
+ trans,
+ "ISR[%d] inta_hw 0x%08x, enabled (sw) 0x%08x (hw) 0x%08x\n",
+ entry->entry, inta_hw, trans_pcie->hw_mask,
+ iwl_read32(trans, CSR_MSIX_HW_INT_MASK_AD));
if (inta_hw & ~trans_pcie->hw_mask)
IWL_DEBUG_ISR(trans,
"We got a masked interrupt 0x%08x\n",
@@ -2433,9 +2440,10 @@ irqreturn_t iwl_pcie_irq_msix_handler(int irq, void *dev_id)
if (sleep_notif == IWL_D3_SLEEP_STATUS_SUSPEND ||
sleep_notif == IWL_D3_SLEEP_STATUS_RESUME) {
- IWL_DEBUG_ISR(trans,
- "Sx interrupt: sleep notification = 0x%x\n",
- sleep_notif);
+ IWL_DEBUG_ISR(
+ trans,
+ "Sx interrupt: sleep notification = 0x%x\n",
+ sleep_notif);
if (trans_pcie->sx_state == IWL_SX_WAITING) {
trans_pcie->sx_state = IWL_SX_COMPLETE;
wake_up(&trans_pcie->sx_waitq);
@@ -2465,8 +2473,7 @@ irqreturn_t iwl_pcie_irq_msix_handler(int irq, void *dev_id)
iwl_pcie_handle_rfkill_irq(trans, true);
if (inta_hw & MSIX_HW_INT_CAUSES_REG_HW_ERR) {
- IWL_ERR(trans,
- "Hardware error detected. Restarting.\n");
+ IWL_ERR(trans, "Hardware error detected. Restarting.\n");
isr_stats->hw++;
trans->dbg.hw_error = true;
--
2.52.0
^ permalink raw reply related [flat|nested] 7+ messages in thread
* [PATCH 2/3] wifi: iwlwifi: mld: fix TSO segmentation explosion when AMSDU is disabled
2026-04-05 5:41 [PATCH v2 0/3] wifi: iwlwifi: mld: fix UAF and soft lockup on firmware error Cole Leavitt
2026-04-05 5:41 ` [PATCH 1/3] wifi: iwlwifi: prevent NAPI processing after " Cole Leavitt
@ 2026-04-05 5:41 ` Cole Leavitt
2026-04-12 3:47 ` Korenblit, Miriam Rachel
2026-04-05 5:41 ` [PATCH 3/3] wifi: iwlwifi: mld: skip TX when firmware is dead Cole Leavitt
2026-04-14 3:51 ` [PATCH v2 0/3] wifi: iwlwifi: mld: fix UAF and soft lockup on firmware error Korenblit, Miriam Rachel
3 siblings, 1 reply; 7+ messages in thread
From: Cole Leavitt @ 2026-04-05 5:41 UTC (permalink / raw)
To: linux-wireless; +Cc: greearb, miriam.rachel.korenblit, johannes, cole
When the TLC notification disables AMSDU for a TID, the MLD driver sets
max_tid_amsdu_len to the sentinel value 1. The TSO segmentation path in
iwl_mld_tx_tso_segment() checks for zero but not for this sentinel,
allowing it to reach the num_subframes calculation:
num_subframes = (max_tid_amsdu_len + pad) / (subf_len + pad)
= (1 + 2) / (1534 + 2) = 0
This zero propagates to iwl_tx_tso_segment() which sets:
gso_size = num_subframes * mss = 0
Calling skb_gso_segment() with gso_size=0 creates over 32000 tiny
segments from a single GSO skb. This floods the TX ring with ~1024
micro-frames (the rest are purged), creating a massive burst of TX
completion events that can lead to memory corruption and a subsequent
use-after-free in TCP's retransmit queue (refcount underflow in
tcp_shifted_skb, NULL deref in tcp_rack_detect_loss).
The MVM driver is immune because it checks mvmsta->amsdu_enabled before
reaching the num_subframes calculation. The MLD driver has no equivalent
bitmap check and relies solely on max_tid_amsdu_len, which does not
catch the sentinel value.
Fix this by detecting the sentinel value (max_tid_amsdu_len == 1) at the
existing check and falling back to non-AMSDU TSO segmentation. Also add
a WARN_ON_ONCE guard after the num_subframes division as defense-in-depth
to catch any future code paths that produce zero through a different
mechanism.
Suggested-by: Miriam Rachel Korenblit <miriam.rachel.korenblit@intel.com>
Fixes: d1e879ec600f ("wifi: iwlwifi: add iwlmld sub-driver")
Signed-off-by: Cole Leavitt <cole@unwrap.rs>
---
drivers/net/wireless/intel/iwlwifi/mld/tx.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/net/wireless/intel/iwlwifi/mld/tx.c b/drivers/net/wireless/intel/iwlwifi/mld/tx.c
index e341d12e5233..8af58aabcd68 100644
--- a/drivers/net/wireless/intel/iwlwifi/mld/tx.c
+++ b/drivers/net/wireless/intel/iwlwifi/mld/tx.c
@@ -823,7 +823,7 @@ static int iwl_mld_tx_tso_segment(struct iwl_mld *mld, struct sk_buff *skb,
return -EINVAL;
max_tid_amsdu_len = sta->cur->max_tid_amsdu_len[tid];
- if (!max_tid_amsdu_len)
+ if (!max_tid_amsdu_len || max_tid_amsdu_len == 1)
return iwl_tx_tso_segment(skb, 1, netdev_flags, mpdus_skbs);
/* Sub frame header + SNAP + IP header + TCP header + MSS */
@@ -835,6 +835,9 @@ static int iwl_mld_tx_tso_segment(struct iwl_mld *mld, struct sk_buff *skb,
*/
num_subframes = (max_tid_amsdu_len + pad) / (subf_len + pad);
+ if (WARN_ON_ONCE(!num_subframes))
+ return iwl_tx_tso_segment(skb, 1, netdev_flags, mpdus_skbs);
+
if (sta->max_amsdu_subframes &&
num_subframes > sta->max_amsdu_subframes)
num_subframes = sta->max_amsdu_subframes;
--
2.52.0
* [PATCH 3/3] wifi: iwlwifi: mld: skip TX when firmware is dead
2026-04-05 5:41 [PATCH v2 0/3] wifi: iwlwifi: mld: fix UAF and soft lockup on firmware error Cole Leavitt
2026-04-05 5:41 ` [PATCH 1/3] wifi: iwlwifi: prevent NAPI processing after " Cole Leavitt
2026-04-05 5:41 ` [PATCH 2/3] wifi: iwlwifi: mld: fix TSO segmentation explosion when AMSDU is disabled Cole Leavitt
@ 2026-04-05 5:41 ` Cole Leavitt
2026-04-14 3:51 ` [PATCH v2 0/3] wifi: iwlwifi: mld: fix UAF and soft lockup on firmware error Korenblit, Miriam Rachel
3 siblings, 0 replies; 7+ messages in thread
From: Cole Leavitt @ 2026-04-05 5:41 UTC (permalink / raw)
To: linux-wireless; +Cc: greearb, miriam.rachel.korenblit, johannes, cole
When firmware encounters an error, STATUS_FW_ERROR is set but the
mac80211 TX path continues pulling frames from TXQs. Each frame
fails at iwl_trans_tx() which checks STATUS_FW_ERROR and returns
-EIO, but iwl_mld_tx_from_txq() keeps looping over every queued
frame. This burns CPU in a tight loop on dead firmware and can
cause soft lockups during firmware error recovery.
Add a STATUS_FW_ERROR check at the top of iwl_mld_tx_from_txq()
to stop pulling frames from mac80211 TXQs when firmware is dead.
Also guard iwl_mld_mac80211_tx() which bypasses the TXQ path
entirely and would otherwise continue feeding frames to dead
firmware.
Once STATUS_FW_ERROR is cleared during firmware restart, TX
resumes naturally with no explicit wake needed.
Fixes: d1e879ec600f ("wifi: iwlwifi: add iwlmld sub-driver")
Signed-off-by: Cole Leavitt <cole@unwrap.rs>
---
drivers/net/wireless/intel/iwlwifi/mld/mac80211.c | 4 ++++
drivers/net/wireless/intel/iwlwifi/mld/tx.c | 3 +++
2 files changed, 7 insertions(+)
diff --git a/drivers/net/wireless/intel/iwlwifi/mld/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mld/mac80211.c
index 71a9a72c9ac0..0df3be3089c3 100644
--- a/drivers/net/wireless/intel/iwlwifi/mld/mac80211.c
+++ b/drivers/net/wireless/intel/iwlwifi/mld/mac80211.c
@@ -519,6 +519,10 @@ iwl_mld_mac80211_tx(struct ieee80211_hw *hw,
u32 link_id = u32_get_bits(info->control.flags,
IEEE80211_TX_CTRL_MLO_LINK);
+ if (unlikely(test_bit(STATUS_FW_ERROR, &mld->trans->status))) {
+ ieee80211_free_txskb(hw, skb);
+ return;
+ }
/* In AP mode, mgmt frames are sent on the bcast station,
* so the FW can't translate the MLD addr to the link addr. Do it here
*/
diff --git a/drivers/net/wireless/intel/iwlwifi/mld/tx.c b/drivers/net/wireless/intel/iwlwifi/mld/tx.c
index 8af58aabcd68..33bd2e336166 100644
--- a/drivers/net/wireless/intel/iwlwifi/mld/tx.c
+++ b/drivers/net/wireless/intel/iwlwifi/mld/tx.c
@@ -962,6 +962,9 @@ void iwl_mld_tx_from_txq(struct iwl_mld *mld, struct ieee80211_txq *txq)
struct sk_buff *skb = NULL;
u8 zero_addr[ETH_ALEN] = {};
+ if (unlikely(test_bit(STATUS_FW_ERROR, &mld->trans->status)))
+ return;
+
/*
* No need for threads to be pending here, they can leave the first
* taker all the work.
--
2.52.0
* RE: [PATCH 2/3] wifi: iwlwifi: mld: fix TSO segmentation explosion when AMSDU is disabled
2026-04-05 5:41 ` [PATCH 2/3] wifi: iwlwifi: mld: fix TSO segmentation explosion when AMSDU is disabled Cole Leavitt
@ 2026-04-12 3:47 ` Korenblit, Miriam Rachel
2026-04-12 14:03 ` Ben Greear
0 siblings, 1 reply; 7+ messages in thread
From: Korenblit, Miriam Rachel @ 2026-04-12 3:47 UTC (permalink / raw)
To: Cole Leavitt, linux-wireless@vger.kernel.org
Cc: greearb@candelatech.com, johannes@sipsolutions.net
> -----Original Message-----
> From: Cole Leavitt <cole@unwrap.rs>
> Sent: Sunday, April 5, 2026 8:42 AM
> To: linux-wireless@vger.kernel.org
> Cc: greearb@candelatech.com; Korenblit, Miriam Rachel
> <miriam.rachel.korenblit@intel.com>; johannes@sipsolutions.net;
> cole@unwrap.rs
> Subject: [PATCH 2/3] wifi: iwlwifi: mld: fix TSO segmentation explosion when
> AMSDU is disabled
>
> When the TLC notification disables AMSDU for a TID, the MLD driver sets
> max_tid_amsdu_len to the sentinel value 1. The TSO segmentation path in
> iwl_mld_tx_tso_segment() checks for zero but not for this sentinel, allowing it to
> reach the num_subframes calculation:
>
> num_subframes = (max_tid_amsdu_len + pad) / (subf_len + pad)
> = (1 + 2) / (1534 + 2) = 0
>
> This zero propagates to iwl_tx_tso_segment() which sets:
>
> gso_size = num_subframes * mss = 0
>
> Calling skb_gso_segment() with gso_size=0 creates over 32000 tiny segments
> from a single GSO skb. This floods the TX ring with ~1024 micro-frames (the rest
> are purged), creating a massive burst of TX completion events that can lead to
> memory corruption and a subsequent use-after-free in TCP's retransmit queue
> (refcount underflow in tcp_shifted_skb, NULL deref in tcp_rack_detect_loss).
And why not fix this issue?
>
> The MVM driver is immune because it checks mvmsta->amsdu_enabled before
> reaching the num_subframes calculation. The MLD driver has no equivalent
> bitmap check and relies solely on max_tid_amsdu_len, which does not catch the
> sentinel value.
>
> Fix this by detecting the sentinel value (max_tid_amsdu_len == 1) at the existing
> check and falling back to non-AMSDU TSO segmentation. Also add a
> WARN_ON_ONCE guard after the num_subframes division as defense-in-depth
> to catch any future code paths that produce zero through a different mechanism.
>
> Suggested-by: Miriam Rachel Korenblit <miriam.rachel.korenblit@intel.com>
> Fixes: d1e879ec600f ("wifi: iwlwifi: add iwlmld sub-driver")
> Signed-off-by: Cole Leavitt <cole@unwrap.rs>
> ---
> drivers/net/wireless/intel/iwlwifi/mld/tx.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/wireless/intel/iwlwifi/mld/tx.c
> b/drivers/net/wireless/intel/iwlwifi/mld/tx.c
> index e341d12e5233..8af58aabcd68 100644
> --- a/drivers/net/wireless/intel/iwlwifi/mld/tx.c
> +++ b/drivers/net/wireless/intel/iwlwifi/mld/tx.c
> @@ -823,7 +823,7 @@ static int iwl_mld_tx_tso_segment(struct iwl_mld *mld,
> struct sk_buff *skb,
> return -EINVAL;
>
> max_tid_amsdu_len = sta->cur->max_tid_amsdu_len[tid];
> - if (!max_tid_amsdu_len)
> + if (!max_tid_amsdu_len || max_tid_amsdu_len == 1)
> return iwl_tx_tso_segment(skb, 1, netdev_flags, mpdus_skbs);
>
> /* Sub frame header + SNAP + IP header + TCP header + MSS */ @@ -
> 835,6 +835,9 @@ static int iwl_mld_tx_tso_segment(struct iwl_mld *mld, struct
> sk_buff *skb,
> */
> num_subframes = (max_tid_amsdu_len + pad) / (subf_len + pad);
>
> + if (WARN_ON_ONCE(!num_subframes))
> + return iwl_tx_tso_segment(skb, 1, netdev_flags, mpdus_skbs);
> +
> if (sta->max_amsdu_subframes &&
> num_subframes > sta->max_amsdu_subframes)
> num_subframes = sta->max_amsdu_subframes;
> --
> 2.52.0
* Re: [PATCH 2/3] wifi: iwlwifi: mld: fix TSO segmentation explosion when AMSDU is disabled
2026-04-12 3:47 ` Korenblit, Miriam Rachel
@ 2026-04-12 14:03 ` Ben Greear
0 siblings, 0 replies; 7+ messages in thread
From: Ben Greear @ 2026-04-12 14:03 UTC (permalink / raw)
To: Korenblit, Miriam Rachel, Cole Leavitt,
linux-wireless@vger.kernel.org
Cc: johannes@sipsolutions.net
On 4/11/26 8:47 PM, Korenblit, Miriam Rachel wrote:
>
>
>> -----Original Message-----
>> From: Cole Leavitt <cole@unwrap.rs>
>> Sent: Sunday, April 5, 2026 8:42 AM
>> To: linux-wireless@vger.kernel.org
>> Cc: greearb@candelatech.com; Korenblit, Miriam Rachel
>> <miriam.rachel.korenblit@intel.com>; johannes@sipsolutions.net;
>> cole@unwrap.rs
>> Subject: [PATCH 2/3] wifi: iwlwifi: mld: fix TSO segmentation explosion when
>> AMSDU is disabled
>>
>> When the TLC notification disables AMSDU for a TID, the MLD driver sets
>> max_tid_amsdu_len to the sentinel value 1. The TSO segmentation path in
>> iwl_mld_tx_tso_segment() checks for zero but not for this sentinel, allowing it to
>> reach the num_subframes calculation:
>>
>> num_subframes = (max_tid_amsdu_len + pad) / (subf_len + pad)
>> = (1 + 2) / (1534 + 2) = 0
>>
>> This zero propagates to iwl_tx_tso_segment() which sets:
>>
>> gso_size = num_subframes * mss = 0
>>
>> Calling skb_gso_segment() with gso_size=0 creates over 32000 tiny segments
>> from a single GSO skb. This floods the TX ring with ~1024 micro-frames (the rest
>> are purged), creating a massive burst of TX completion events that can lead to
>> memory corruption and a subsequent use-after-free in TCP's retransmit queue
>> (refcount underflow in tcp_shifted_skb, NULL deref in tcp_rack_detect_loss).
>
> And why not fixing this issue?
We have been running with this patch. It doesn't seem to cause harm,
but also, we still saw at least a few warnings about 32k spins in GSO logic.
I am pretty sure the 32k GSO spin we see is due to some skb memory corruption of some kind,
and I don't know the root cause.
Maybe this patch is still worth having, but the description about lots of tx completion events
causing mem corruption seems unfounded to me, and while I have no proof, this sort of
over-confidence in cause vs effect appears similar to some other AI-generated patches I've seen.
Thanks,
Ben
--
Ben Greear <greearb@candelatech.com>
Candela Technologies Inc http://www.candelatech.com
* RE: [PATCH v2 0/3] wifi: iwlwifi: mld: fix UAF and soft lockup on firmware error
2026-04-05 5:41 [PATCH v2 0/3] wifi: iwlwifi: mld: fix UAF and soft lockup on firmware error Cole Leavitt
` (2 preceding siblings ...)
2026-04-05 5:41 ` [PATCH 3/3] wifi: iwlwifi: mld: skip TX when firmware is dead Cole Leavitt
@ 2026-04-14 3:51 ` Korenblit, Miriam Rachel
3 siblings, 0 replies; 7+ messages in thread
From: Korenblit, Miriam Rachel @ 2026-04-14 3:51 UTC (permalink / raw)
To: Cole Leavitt, linux-wireless@vger.kernel.org
Cc: greearb@candelatech.com, johannes@sipsolutions.net
> -----Original Message-----
> From: Cole Leavitt <cole@unwrap.rs>
> Sent: Sunday, April 5, 2026 8:42 AM
> To: linux-wireless@vger.kernel.org
> Cc: greearb@candelatech.com; Korenblit, Miriam Rachel
> <miriam.rachel.korenblit@intel.com>; johannes@sipsolutions.net;
> cole@unwrap.rs
> Subject: [PATCH v2 0/3] wifi: iwlwifi: mld: fix UAF and soft lockup on firmware
> error
>
> Three fixes for the iwlmld sub-driver addressing use-after-free, TSO segmentation
> explosion, and soft lockups during firmware error recovery on Intel BE200 (WiFi7).
>
> 1/3 closes the NAPI race window where stale RX data from dying firmware
> reaches the TX completion handlers, causing corrupt SSN values to trigger skb
> use-after-free in iwl_trans_reclaim().
> Ben Greear confirmed the WARN_ONCE fires on his test systems.
>
> 2/3 fixes the TSO segmentation explosion when AMSDU is disabled for a TID. The
> TLC notification sets max_tid_amsdu_len to the sentinel value 1, which slips past
> the existing zero check and produces num_subframes=0, causing
> skb_gso_segment() to create
> 32000+ tiny segments. Revised per Miriam Korenblit's feedback to
> check for the sentinel value directly and add a WARN_ON_ONCE guard after the
> division as defense-in-depth.
>
> 3/3 adds STATUS_FW_ERROR checks in the TX pull path to stop feeding frames to
> dead firmware. Revised per Johannes Berg's feedback to use status bit checks
> instead of stop_queues/wake_queues, which doesn't interact well with TXQ-
> based APIs.
Was the soft lockup happening after (as a consequence of?) the bug that you fixed in 2/3?
I am wondering why we didn't see it in the mvm driver. I don't think we have such a guard there.
>
> Changes since v1:
> - 1/3: Added Tested-by from Ben Greear
> - 2/3: Check max_tid_amsdu_len == 1 (sentinel) instead of
> guarding !num_subframes after division; added WARN_ON_ONCE
> defense-in-depth (Suggested-by: Miriam Korenblit)
> - 3/3: Replaced ieee80211_stop_queues()/wake_queues() with
> STATUS_FW_ERROR checks in TX pull path (per Johannes Berg)
>
> Cole Leavitt (3):
> wifi: iwlwifi: prevent NAPI processing after firmware error
> wifi: iwlwifi: mld: fix TSO segmentation explosion when AMSDU is
> disabled
> wifi: iwlwifi: mld: skip TX when firmware is dead
>
> .../net/wireless/intel/iwlwifi/mld/mac80211.c | 4 +
> drivers/net/wireless/intel/iwlwifi/mld/tx.c | 210 +++++------
> .../wireless/intel/iwlwifi/pcie/gen1_2/rx.c | 337 +++++++++---------
> 3 files changed, 284 insertions(+), 267 deletions(-)
>
>
> base-commit: 3aae9383f42f687221c011d7ee87529398e826b3
> --
> 2.52.0