> This is an AI-generated review of your patch. The human sending this
> email says "For your consideration."
> ---
> net: airoha: Move airoha_qdma pointer in airoha_gdm_dev struct
>
> This commit moves the airoha_qdma pointer from the airoha_gdm_port struct
> to the airoha_gdm_dev struct. This ensures the QDMA block used depends on
> the specific net_device WAN/LAN configuration.
>
> [ ... ]
>
> > --- a/drivers/net/ethernet/airoha/airoha_eth.c
> > +++ b/drivers/net/ethernet/airoha/airoha_eth.c
>
> [ ... ]
>
> > @@ -1497,9 +1498,10 @@ static void airoha_qdma_stop_napi(struct airoha_qdma *qdma)
> >  	}
> >  }
> >
> > -static void airoha_update_hw_stats(struct airoha_gdm_port *port)
> > +static void airoha_update_hw_stats(struct airoha_gdm_dev *dev)
> >  {
> > -	struct airoha_eth *eth = port->qdma->eth;
> > +	struct airoha_gdm_port *port = dev->port;
> > +	struct airoha_eth *eth = dev->eth;
> >  	u32 val, i = 0;
> >
> >  	spin_lock(&port->stats.lock);
>
> This isn't a bug introduced by this patch, but does this spin_lock()
> carry a deadlock risk?
>
> Because ndo_get_stats64() can be called from atomic context (e.g., from a
> softirq), if a process context caller holds port->stats.lock and is
> interrupted by a softirq on the same CPU that also triggers dev_get_stats(),
> the softirq could spin forever waiting for the lock.
>
> Should this be upgraded to spin_lock_bh() to ensure local softirqs are
> disabled during the critical section?

I was not able to spot any point in the networking stack where the
ndo_get_stats64 callback is run from a softirq. Moreover,
airoha_update_hw_stats() is never executed from any NAPI context in the
airoha_eth driver, so it seems to me spin_lock() is enough here. Am I
missing something?

> [ ... ]
>
> > --- a/drivers/net/ethernet/airoha/airoha_eth.h
> > +++ b/drivers/net/ethernet/airoha/airoha_eth.h
> > @@ -534,12 +534,12 @@ struct airoha_qdma {
> >
> >  struct airoha_gdm_dev {
> >  	struct airoha_gdm_port *port;
> > +	struct airoha_qdma *qdma;
> >  	struct net_device *dev;
> >  	struct airoha_eth *eth;
> >  };
> >
> >  struct airoha_gdm_port {
> > -	struct airoha_qdma *qdma;
> >  	struct airoha_gdm_dev *dev;
> >  	int id;
>
> This isn't a bug introduced by this patch, but since the qdma pointer
> is being moved to better align with the hardware structure, should
> qos_sq_bmap, cpu_tx_packets, and fwd_tx_packets be moved to
> struct airoha_qdma as well?
>
> Because QDMA1 is shared by all WAN ports (GDM 2, 3, 4) and qos_sq_bmap
> is currently tracked per-port instead of per-QDMA, each WAN port
> independently allocates QDMA1 channels starting from index 0. Does this
> cause different ports to overwrite each other's hardware QoS configurations
> in the shared QDMA1 block?
>
> Furthermore, cpu_tx_packets and fwd_tx_packets cache per-channel ETS
> hardware counters but are single u64 scalar variables in the port structure.
> When querying stats for multiple channels, do these scalar caches get
> overwritten with values from completely different channels, resulting in
> incorrect delta calculations?

Ack, I will fix it in v4.

Regards,
Lorenzo