* [PATCH v1 0/2] Update Rx Timestamp in IAVF PMD @ 2026-04-02 15:21 Soumyadeep Hore 2026-04-02 15:21 ` [PATCH v1 1/2] net/iavf: remove PHC polling from Rx datapath Soumyadeep Hore 2026-04-02 15:21 ` [PATCH v1 " Soumyadeep Hore 0 siblings, 2 replies; 13+ messages in thread From: Soumyadeep Hore @ 2026-04-02 15:21 UTC (permalink / raw) To: bruce.richardson, manoj.kumar.subbarao, aman.deep.singh, dev, rajesh3.kumar PHC Polling from Rx Datapath is removed and existing alarm handlers are used to fix latency issues in IAVF PMD. Soumyadeep Hore (2): net/iavf: remove PHC polling from Rx datapath net/iavf: reuse device alarm for PHC sync drivers/net/intel/iavf/iavf.h | 5 ++ drivers/net/intel/iavf/iavf_ethdev.c | 70 +++++++++++++++++++ drivers/net/intel/iavf/iavf_rxtx.c | 34 --------- drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c | 16 +---- drivers/net/intel/iavf/iavf_rxtx_vec_avx512.c | 16 +---- drivers/net/intel/iavf/iavf_vchnl.c | 4 ++ 6 files changed, 83 insertions(+), 62 deletions(-) -- 2.47.1 ^ permalink raw reply [flat|nested] 13+ messages in thread
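For context on what the series removes: the Rx burst functions computed a millisecond software clock from the TSC on every call and re-read the PHC over the slow control path whenever the cached value looked stale. A sketch of that staleness test (not the driver's exact code; the threshold and the ms-from-cycles arithmetic mirror the removed hunks):

```c
#include <stdint.h>

/* Sketch of the per-burst staleness check this series removes from the
 * Rx hot path: derive a millisecond timestamp from timer cycles and
 * compare it against the last PHC refresh time. The divide plus branch
 * ran on every Rx burst, which is the overhead being eliminated.
 */
#define PHC_STALE_MS 4 /* threshold used by the removed code */

static int
phc_cache_is_stale(uint64_t timer_cycles, uint64_t timer_hz,
		   uint64_t hw_time_update_ms)
{
	/* convert cycles to milliseconds, as the removed code did */
	uint64_t sw_cur_time = timer_cycles / (timer_hz / 1000);

	return (sw_cur_time - hw_time_update_ms) > PHC_STALE_MS;
}
```

When the check fired, the datapath called `iavf_get_phc_time()`, a virtchnl round trip, from inside packet reception; the series moves that refresh to an alarm context instead.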
* [PATCH v1 1/2] net/iavf: remove PHC polling from Rx datapath 2026-04-02 15:21 [PATCH v1 0/2] Update Rx Timestamp in IAVF PMD Soumyadeep Hore @ 2026-04-02 15:21 ` Soumyadeep Hore 2026-04-02 15:46 ` [PATCH v2 0/2] Update Rx Timestamp in IAVF PMD Soumyadeep Hore 2026-04-02 15:48 ` [PATCH v2 0/2] Update Rx Timestamp in IAVF PMD Soumyadeep Hore 2026-04-02 15:21 ` [PATCH v1 " Soumyadeep Hore 1 sibling, 2 replies; 13+ messages in thread From: Soumyadeep Hore @ 2026-04-02 15:21 UTC (permalink / raw) To: bruce.richardson, manoj.kumar.subbarao, aman.deep.singh, dev, rajesh3.kumar Remove periodic PHC read/update checks from scalar and vector flex RX paths, keeping timestamp conversion based on queue PHC state. This avoids hot-path PHC polling overhead and preserves the latency fix for RX timestamp-enabled traffic. Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com> --- drivers/net/intel/iavf/iavf_rxtx.c | 34 ------------------- drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c | 16 ++------- drivers/net/intel/iavf/iavf_rxtx_vec_avx512.c | 16 ++------- 3 files changed, 4 insertions(+), 62 deletions(-) diff --git a/drivers/net/intel/iavf/iavf_rxtx.c b/drivers/net/intel/iavf/iavf_rxtx.c index e621d4bf47..76615f39e8 100644 --- a/drivers/net/intel/iavf/iavf_rxtx.c +++ b/drivers/net/intel/iavf/iavf_rxtx.c @@ -1507,16 +1507,6 @@ iavf_recv_pkts_flex_rxd(void *rx_queue, rx_ring = rxq->rx_flex_ring; ptype_tbl = rxq->iavf_vsi->adapter->ptype_tbl; - if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) { - uint64_t sw_cur_time = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); - - if (sw_cur_time - rxq->hw_time_update > 4) { - if (iavf_get_phc_time(rxq)) - PMD_DRV_LOG(ERR, "get physical time failed"); - rxq->hw_time_update = sw_cur_time; - } - } - while (nb_rx < nb_pkts) { rxdp = &rx_ring[rx_id]; rx_stat_err0 = rte_le_to_cpu_16(rxdp->wb.status_error0); @@ -1585,7 +1575,6 @@ iavf_recv_pkts_flex_rxd(void *rx_queue, rte_le_to_cpu_32(rxd.wb.flex_ts.ts_high)); rxq->phc_time = ts_ns; - 
rxq->hw_time_update = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); *RTE_MBUF_DYNFIELD(rxm, iavf_timestamp_dynfield_offset, @@ -1627,16 +1616,6 @@ iavf_recv_scattered_pkts_flex_rxd(void *rx_queue, struct rte_mbuf **rx_pkts, volatile union ci_rx_flex_desc *rxdp; const uint32_t *ptype_tbl = rxq->iavf_vsi->adapter->ptype_tbl; - if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) { - uint64_t sw_cur_time = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); - - if (sw_cur_time - rxq->hw_time_update > 4) { - if (iavf_get_phc_time(rxq)) - PMD_DRV_LOG(ERR, "get physical time failed"); - rxq->hw_time_update = sw_cur_time; - } - } - while (nb_rx < nb_pkts) { rxdp = &rx_ring[rx_id]; rx_stat_err0 = rte_le_to_cpu_16(rxdp->wb.status_error0); @@ -1755,7 +1734,6 @@ iavf_recv_scattered_pkts_flex_rxd(void *rx_queue, struct rte_mbuf **rx_pkts, rte_le_to_cpu_32(rxd.wb.flex_ts.ts_high)); rxq->phc_time = ts_ns; - rxq->hw_time_update = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); *RTE_MBUF_DYNFIELD(first_seg, iavf_timestamp_dynfield_offset, @@ -1969,16 +1947,6 @@ iavf_rx_scan_hw_ring_flex_rxd(struct ci_rx_queue *rxq, if (!(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_DD_S))) return 0; - if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) { - uint64_t sw_cur_time = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); - - if (sw_cur_time - rxq->hw_time_update > 4) { - if (iavf_get_phc_time(rxq)) - PMD_DRV_LOG(ERR, "get physical time failed"); - rxq->hw_time_update = sw_cur_time; - } - } - /* Scan LOOK_AHEAD descriptors at a time to determine which * descriptors reference packets that are ready to be received. 
*/ @@ -2041,8 +2009,6 @@ iavf_rx_scan_hw_ring_flex_rxd(struct ci_rx_queue *rxq, rte_le_to_cpu_32(rxdp[j].wb.flex_ts.ts_high)); rxq->phc_time = ts_ns; - rxq->hw_time_update = rte_get_timer_cycles() / - (rte_get_timer_hz() / 1000); *RTE_MBUF_DYNFIELD(mb, iavf_timestamp_dynfield_offset, diff --git a/drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c index 2e18be3616..a688ad4230 100644 --- a/drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c +++ b/drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c @@ -514,18 +514,10 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct ci_rx_queue *rxq, if (!(rxdp->wb.status_error0 & rte_cpu_to_le_32(1 << IAVF_RX_FLEX_DESC_STATUS0_DD_S))) return 0; - bool is_tsinit = false; uint8_t inflection_point = 0; __m256i hw_low_last = _mm256_set_epi32(0, 0, 0, 0, 0, 0, 0, rxq->phc_time); if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) { - uint64_t sw_cur_time = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); - - if (unlikely(sw_cur_time - rxq->hw_time_update > 4)) { - hw_low_last = _mm256_setzero_si256(); - is_tsinit = 1; - } else { - hw_low_last = _mm256_set_epi32(0, 0, 0, 0, 0, 0, 0, rxq->phc_time); - } + hw_low_last = _mm256_set_epi32(0, 0, 0, 0, 0, 0, 0, rxq->phc_time); } /* constants used in processing loop */ @@ -1152,10 +1144,8 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct ci_rx_queue *rxq, *RTE_MBUF_DYNFIELD(rx_pkts[i + 7], iavf_timestamp_dynfield_offset, uint32_t *) = _mm256_extract_epi32(ts_low1, 7); - if (unlikely(is_tsinit)) { + { uint32_t in_timestamp; - if (iavf_get_phc_time(rxq)) - PMD_DRV_LOG(ERR, "get physical time failed"); in_timestamp = *RTE_MBUF_DYNFIELD(rx_pkts[i + 0], iavf_timestamp_dynfield_offset, uint32_t *); rxq->phc_time = iavf_tstamp_convert_32b_64b(rxq->phc_time, in_timestamp); @@ -1388,8 +1378,6 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct ci_rx_queue *rxq, PMD_DRV_LOG(ERR, "invalid inflection point for rx timestamp"); break; } - - rxq->hw_time_update = rte_get_timer_cycles() / 
(rte_get_timer_hz() / 1000); } if (burst != IAVF_VPMD_DESCS_PER_LOOP_WIDE) break; diff --git a/drivers/net/intel/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/intel/iavf/iavf_rxtx_vec_avx512.c index 9a93a0b062..7fc3ba8956 100644 --- a/drivers/net/intel/iavf/iavf_rxtx_vec_avx512.c +++ b/drivers/net/intel/iavf/iavf_rxtx_vec_avx512.c @@ -615,18 +615,10 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct ci_rx_queue *rxq, #ifdef IAVF_RX_TS_OFFLOAD uint8_t inflection_point = 0; - bool is_tsinit = false; __m256i hw_low_last = _mm256_set_epi32(0, 0, 0, 0, 0, 0, 0, (uint32_t)rxq->phc_time); if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) { - uint64_t sw_cur_time = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); - - if (unlikely(sw_cur_time - rxq->hw_time_update > 4)) { - hw_low_last = _mm256_setzero_si256(); - is_tsinit = 1; - } else { - hw_low_last = _mm256_set_epi32(0, 0, 0, 0, 0, 0, 0, (uint32_t)rxq->phc_time); - } + hw_low_last = _mm256_set_epi32(0, 0, 0, 0, 0, 0, 0, (uint32_t)rxq->phc_time); } #endif @@ -1343,11 +1335,9 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct ci_rx_queue *rxq, *RTE_MBUF_DYNFIELD(rx_pkts[i + 7], iavf_timestamp_dynfield_offset, uint32_t *) = _mm256_extract_epi32(ts_low1, 7); - if (unlikely(is_tsinit)) { + { uint32_t in_timestamp; - if (iavf_get_phc_time(rxq)) - PMD_DRV_LOG(ERR, "get physical time failed"); in_timestamp = *RTE_MBUF_DYNFIELD(rx_pkts[i + 0], iavf_timestamp_dynfield_offset, uint32_t *); rxq->phc_time = iavf_tstamp_convert_32b_64b(rxq->phc_time, in_timestamp); @@ -1584,8 +1574,6 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct ci_rx_queue *rxq, PMD_DRV_LOG(ERR, "invalid inflection point for rx timestamp"); break; } - - rxq->hw_time_update = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); } #endif if (burst != IAVF_VPMD_DESCS_PER_LOOP_WIDE) -- 2.47.1 ^ permalink raw reply related [flat|nested] 13+ messages in thread
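The conversion the patch keeps in the hot path, `iavf_tstamp_convert_32b_64b()`, extends the 32-bit descriptor timestamp to 64 bits against the cached PHC time by choosing the nearer 32-bit rollover. A sketch of that technique (not the driver's exact implementation, but the same unsigned-delta approach):

```c
#include <stdint.h>

/* Extend a 32-bit hardware timestamp to 64 bits using a recent 64-bit
 * reference (the cached PHC time). At 1 ns resolution the low 32 bits
 * wrap roughly every 4.3 s; the nearer wrap is picked by comparing the
 * naturally wrapping unsigned delta against half the 32-bit range.
 */
static uint64_t
tstamp_convert_32b_64b(uint64_t ref, uint32_t ts)
{
	uint32_t ref_lo = (uint32_t)ref;
	uint32_t delta = ts - ref_lo;      /* unsigned wraparound is intended */

	if (delta <= UINT32_MAX / 2)       /* ts is (slightly) ahead of ref */
		return ref + delta;
	/* ts is behind ref, possibly across a rollover boundary */
	return ref - (uint32_t)(ref_lo - ts);
}
```

This only works while the reference stays within about half a wrap (~2 s) of the packet timestamps, which is why the cached PHC time still has to be refreshed periodically somewhere, the job patch 2/2 hands to the device alarm.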
* [PATCH v2 0/2] Update Rx Timestamp in IAVF PMD 2026-04-02 15:21 ` [PATCH v1 1/2] net/iavf: remove PHC polling from Rx datapath Soumyadeep Hore @ 2026-04-02 15:46 ` Soumyadeep Hore 2026-04-02 15:46 ` [PATCH v2 1/2] net/iavf: remove PHC polling from Rx datapath Soumyadeep Hore 2026-04-02 15:46 ` [PATCH v2 " Soumyadeep Hore 2026-04-02 15:48 ` [PATCH v2 0/2] Update Rx Timestamp in IAVF PMD Soumyadeep Hore 1 sibling, 2 replies; 13+ messages in thread From: Soumyadeep Hore @ 2026-04-02 15:46 UTC (permalink / raw) To: bruce.richardson, manoj.kumar.subbarao, aman.deep.singh, dev, rajesh3.kumar PHC Polling from Rx Datapath is removed and existing alarm handlers are used to fix latency issues in IAVF PMD. --- v2: - Fixed patch apply issues --- Soumyadeep Hore (2): net/iavf: remove PHC polling from Rx datapath net/iavf: reuse device alarm for PHC sync drivers/net/intel/iavf/iavf.h | 5 ++ drivers/net/intel/iavf/iavf_ethdev.c | 70 +++++++++++++++++++ drivers/net/intel/iavf/iavf_rxtx.c | 34 --------- drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c | 16 +---- drivers/net/intel/iavf/iavf_rxtx_vec_avx512.c | 16 +---- drivers/net/intel/iavf/iavf_vchnl.c | 4 ++ 6 files changed, 83 insertions(+), 62 deletions(-) -- 2.47.1 ^ permalink raw reply [flat|nested] 13+ messages in thread
* [PATCH v2 1/2] net/iavf: remove PHC polling from Rx datapath 2026-04-02 15:46 ` [PATCH v2 0/2] Update Rx Timestamp in IAVF PMD Soumyadeep Hore @ 2026-04-02 15:46 ` Soumyadeep Hore 2026-04-06 21:22 ` [PATCH v3 0/2] Update Rx Timestamp in IAVF PMD Soumyadeep Hore 2026-04-02 15:46 ` [PATCH v2 " Soumyadeep Hore 1 sibling, 1 reply; 13+ messages in thread From: Soumyadeep Hore @ 2026-04-02 15:46 UTC (permalink / raw) To: bruce.richardson, manoj.kumar.subbarao, aman.deep.singh, dev, rajesh3.kumar Remove periodic PHC read/update checks from scalar and vector flex RX paths, keeping timestamp conversion based on queue PHC state. This avoids hot-path PHC polling overhead and preserves the latency fix for RX timestamp-enabled traffic. Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com> --- drivers/net/intel/iavf/iavf_rxtx.c | 34 ------------------- drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c | 16 ++------- drivers/net/intel/iavf/iavf_rxtx_vec_avx512.c | 16 ++------- 3 files changed, 4 insertions(+), 62 deletions(-) diff --git a/drivers/net/intel/iavf/iavf_rxtx.c b/drivers/net/intel/iavf/iavf_rxtx.c index 4ff6c18dc4..fabccc89bf 100644 --- a/drivers/net/intel/iavf/iavf_rxtx.c +++ b/drivers/net/intel/iavf/iavf_rxtx.c @@ -1507,16 +1507,6 @@ iavf_recv_pkts_flex_rxd(void *rx_queue, rx_ring = rxq->rx_flex_ring; ptype_tbl = rxq->iavf_vsi->adapter->ptype_tbl; - if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) { - uint64_t sw_cur_time = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); - - if (sw_cur_time - rxq->hw_time_update > 4) { - if (iavf_get_phc_time(rxq)) - PMD_DRV_LOG(ERR, "get physical time failed"); - rxq->hw_time_update = sw_cur_time; - } - } - while (nb_rx < nb_pkts) { rxdp = &rx_ring[rx_id]; rx_stat_err0 = rte_le_to_cpu_16(rxdp->wb.status_error0); @@ -1585,7 +1575,6 @@ iavf_recv_pkts_flex_rxd(void *rx_queue, rte_le_to_cpu_32(rxd.wb.flex_ts.ts_high)); rxq->phc_time = ts_ns; - rxq->hw_time_update = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); 
*RTE_MBUF_DYNFIELD(rxm, iavf_timestamp_dynfield_offset, @@ -1627,16 +1616,6 @@ iavf_recv_scattered_pkts_flex_rxd(void *rx_queue, struct rte_mbuf **rx_pkts, volatile union ci_rx_flex_desc *rxdp; const uint32_t *ptype_tbl = rxq->iavf_vsi->adapter->ptype_tbl; - if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) { - uint64_t sw_cur_time = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); - - if (sw_cur_time - rxq->hw_time_update > 4) { - if (iavf_get_phc_time(rxq)) - PMD_DRV_LOG(ERR, "get physical time failed"); - rxq->hw_time_update = sw_cur_time; - } - } - while (nb_rx < nb_pkts) { rxdp = &rx_ring[rx_id]; rx_stat_err0 = rte_le_to_cpu_16(rxdp->wb.status_error0); @@ -1755,7 +1734,6 @@ iavf_recv_scattered_pkts_flex_rxd(void *rx_queue, struct rte_mbuf **rx_pkts, rte_le_to_cpu_32(rxd.wb.flex_ts.ts_high)); rxq->phc_time = ts_ns; - rxq->hw_time_update = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); *RTE_MBUF_DYNFIELD(first_seg, iavf_timestamp_dynfield_offset, @@ -1969,16 +1947,6 @@ iavf_rx_scan_hw_ring_flex_rxd(struct ci_rx_queue *rxq, if (!(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_DD_S))) return 0; - if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) { - uint64_t sw_cur_time = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); - - if (sw_cur_time - rxq->hw_time_update > 4) { - if (iavf_get_phc_time(rxq)) - PMD_DRV_LOG(ERR, "get physical time failed"); - rxq->hw_time_update = sw_cur_time; - } - } - /* Scan LOOK_AHEAD descriptors at a time to determine which * descriptors reference packets that are ready to be received. 
*/ @@ -2041,8 +2009,6 @@ iavf_rx_scan_hw_ring_flex_rxd(struct ci_rx_queue *rxq, rte_le_to_cpu_32(rxdp[j].wb.flex_ts.ts_high)); rxq->phc_time = ts_ns; - rxq->hw_time_update = rte_get_timer_cycles() / - (rte_get_timer_hz() / 1000); *RTE_MBUF_DYNFIELD(mb, iavf_timestamp_dynfield_offset, diff --git a/drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c index db0462f0f5..9349646d55 100644 --- a/drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c +++ b/drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c @@ -514,18 +514,10 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct ci_rx_queue *rxq, if (!(rxdp->wb.status_error0 & rte_cpu_to_le_32(1 << IAVF_RX_FLEX_DESC_STATUS0_DD_S))) return 0; - bool is_tsinit = false; uint8_t inflection_point = 0; __m256i hw_low_last = _mm256_set_epi32(0, 0, 0, 0, 0, 0, 0, rxq->phc_time); if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) { - uint64_t sw_cur_time = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); - - if (unlikely(sw_cur_time - rxq->hw_time_update > 4)) { - hw_low_last = _mm256_setzero_si256(); - is_tsinit = 1; - } else { - hw_low_last = _mm256_set_epi32(0, 0, 0, 0, 0, 0, 0, rxq->phc_time); - } + hw_low_last = _mm256_set_epi32(0, 0, 0, 0, 0, 0, 0, rxq->phc_time); } /* constants used in processing loop */ @@ -1152,10 +1144,8 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct ci_rx_queue *rxq, *RTE_MBUF_DYNFIELD(rx_pkts[i + 7], iavf_timestamp_dynfield_offset, uint32_t *) = _mm256_extract_epi32(ts_low1, 7); - if (unlikely(is_tsinit)) { + { uint32_t in_timestamp; - if (iavf_get_phc_time(rxq)) - PMD_DRV_LOG(ERR, "get physical time failed"); in_timestamp = *RTE_MBUF_DYNFIELD(rx_pkts[i + 0], iavf_timestamp_dynfield_offset, uint32_t *); rxq->phc_time = iavf_tstamp_convert_32b_64b(rxq->phc_time, in_timestamp); @@ -1388,8 +1378,6 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct ci_rx_queue *rxq, PMD_DRV_LOG(ERR, "invalid inflection point for rx timestamp"); break; } - - rxq->hw_time_update = rte_get_timer_cycles() / 
(rte_get_timer_hz() / 1000); } if (burst != IAVF_VPMD_DESCS_PER_LOOP_WIDE) break; diff --git a/drivers/net/intel/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/intel/iavf/iavf_rxtx_vec_avx512.c index 4e8bf94fa0..1bb3e9746b 100644 --- a/drivers/net/intel/iavf/iavf_rxtx_vec_avx512.c +++ b/drivers/net/intel/iavf/iavf_rxtx_vec_avx512.c @@ -615,18 +615,10 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct ci_rx_queue *rxq, #ifdef IAVF_RX_TS_OFFLOAD uint8_t inflection_point = 0; - bool is_tsinit = false; __m256i hw_low_last = _mm256_set_epi32(0, 0, 0, 0, 0, 0, 0, (uint32_t)rxq->phc_time); if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) { - uint64_t sw_cur_time = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); - - if (unlikely(sw_cur_time - rxq->hw_time_update > 4)) { - hw_low_last = _mm256_setzero_si256(); - is_tsinit = 1; - } else { - hw_low_last = _mm256_set_epi32(0, 0, 0, 0, 0, 0, 0, (uint32_t)rxq->phc_time); - } + hw_low_last = _mm256_set_epi32(0, 0, 0, 0, 0, 0, 0, (uint32_t)rxq->phc_time); } #endif @@ -1343,11 +1335,9 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct ci_rx_queue *rxq, *RTE_MBUF_DYNFIELD(rx_pkts[i + 7], iavf_timestamp_dynfield_offset, uint32_t *) = _mm256_extract_epi32(ts_low1, 7); - if (unlikely(is_tsinit)) { + { uint32_t in_timestamp; - if (iavf_get_phc_time(rxq)) - PMD_DRV_LOG(ERR, "get physical time failed"); in_timestamp = *RTE_MBUF_DYNFIELD(rx_pkts[i + 0], iavf_timestamp_dynfield_offset, uint32_t *); rxq->phc_time = iavf_tstamp_convert_32b_64b(rxq->phc_time, in_timestamp); @@ -1584,8 +1574,6 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct ci_rx_queue *rxq, PMD_DRV_LOG(ERR, "invalid inflection point for rx timestamp"); break; } - - rxq->hw_time_update = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); } #endif if (burst != IAVF_VPMD_DESCS_PER_LOOP_WIDE) -- 2.47.1 ^ permalink raw reply related [flat|nested] 13+ messages in thread
* [PATCH v3 0/2] Update Rx Timestamp in IAVF PMD 2026-04-02 15:46 ` [PATCH v2 1/2] net/iavf: remove PHC polling from Rx datapath Soumyadeep Hore @ 2026-04-06 21:22 ` Soumyadeep Hore 2026-04-06 21:22 ` [PATCH v3 1/2] net/iavf: remove PHC polling from Rx datapath Soumyadeep Hore 2026-04-06 21:22 ` [PATCH v3 2/2] net/iavf: reuse device alarm for PHC sync Soumyadeep Hore 0 siblings, 2 replies; 13+ messages in thread From: Soumyadeep Hore @ 2026-04-06 21:22 UTC (permalink / raw) To: bruce.richardson, manoj.kumar.subbarao, aman.deep.singh, dev, rajesh3.kumar PHC Polling from Rx Datapath is removed and existing alarm handlers are used to fix latency issues in IAVF PMD. --- v3: - Addressed AI code reviews --- v2: - Fixed patch apply issues --- Soumyadeep Hore (2): net/iavf: remove PHC polling from Rx datapath net/iavf: reuse device alarm for PHC sync drivers/net/intel/iavf/iavf.h | 6 + drivers/net/intel/iavf/iavf_ethdev.c | 128 ++++++++++++++++++ drivers/net/intel/iavf/iavf_rxtx.c | 34 ----- drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c | 24 +--- drivers/net/intel/iavf/iavf_rxtx_vec_avx512.c | 26 +--- drivers/net/intel/iavf/iavf_vchnl.c | 4 + 6 files changed, 144 insertions(+), 78 deletions(-) -- 2.47.1 ^ permalink raw reply [flat|nested] 13+ messages in thread
* [PATCH v3 1/2] net/iavf: remove PHC polling from Rx datapath 2026-04-06 21:22 ` [PATCH v3 0/2] Update Rx Timestamp in IAVF PMD Soumyadeep Hore @ 2026-04-06 21:22 ` Soumyadeep Hore 2026-04-08 16:27 ` Bruce Richardson 2026-04-06 21:22 ` [PATCH v3 2/2] net/iavf: reuse device alarm for PHC sync Soumyadeep Hore 1 sibling, 1 reply; 13+ messages in thread From: Soumyadeep Hore @ 2026-04-06 21:22 UTC (permalink / raw) To: bruce.richardson, manoj.kumar.subbarao, aman.deep.singh, dev, rajesh3.kumar Remove periodic PHC read/update checks from scalar and vector flex RX paths, keeping timestamp conversion based on queue PHC state. This avoids hot-path PHC polling overhead and preserves the latency fix for RX timestamp-enabled traffic. Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com> --- drivers/net/intel/iavf/iavf_rxtx.c | 34 ------------------- drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c | 24 ++----------- drivers/net/intel/iavf/iavf_rxtx_vec_avx512.c | 26 ++------------ 3 files changed, 6 insertions(+), 78 deletions(-) diff --git a/drivers/net/intel/iavf/iavf_rxtx.c b/drivers/net/intel/iavf/iavf_rxtx.c index 4ff6c18dc4..fabccc89bf 100644 --- a/drivers/net/intel/iavf/iavf_rxtx.c +++ b/drivers/net/intel/iavf/iavf_rxtx.c @@ -1507,16 +1507,6 @@ iavf_recv_pkts_flex_rxd(void *rx_queue, rx_ring = rxq->rx_flex_ring; ptype_tbl = rxq->iavf_vsi->adapter->ptype_tbl; - if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) { - uint64_t sw_cur_time = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); - - if (sw_cur_time - rxq->hw_time_update > 4) { - if (iavf_get_phc_time(rxq)) - PMD_DRV_LOG(ERR, "get physical time failed"); - rxq->hw_time_update = sw_cur_time; - } - } - while (nb_rx < nb_pkts) { rxdp = &rx_ring[rx_id]; rx_stat_err0 = rte_le_to_cpu_16(rxdp->wb.status_error0); @@ -1585,7 +1575,6 @@ iavf_recv_pkts_flex_rxd(void *rx_queue, rte_le_to_cpu_32(rxd.wb.flex_ts.ts_high)); rxq->phc_time = ts_ns; - rxq->hw_time_update = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); 
*RTE_MBUF_DYNFIELD(rxm, iavf_timestamp_dynfield_offset, @@ -1627,16 +1616,6 @@ iavf_recv_scattered_pkts_flex_rxd(void *rx_queue, struct rte_mbuf **rx_pkts, volatile union ci_rx_flex_desc *rxdp; const uint32_t *ptype_tbl = rxq->iavf_vsi->adapter->ptype_tbl; - if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) { - uint64_t sw_cur_time = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); - - if (sw_cur_time - rxq->hw_time_update > 4) { - if (iavf_get_phc_time(rxq)) - PMD_DRV_LOG(ERR, "get physical time failed"); - rxq->hw_time_update = sw_cur_time; - } - } - while (nb_rx < nb_pkts) { rxdp = &rx_ring[rx_id]; rx_stat_err0 = rte_le_to_cpu_16(rxdp->wb.status_error0); @@ -1755,7 +1734,6 @@ iavf_recv_scattered_pkts_flex_rxd(void *rx_queue, struct rte_mbuf **rx_pkts, rte_le_to_cpu_32(rxd.wb.flex_ts.ts_high)); rxq->phc_time = ts_ns; - rxq->hw_time_update = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); *RTE_MBUF_DYNFIELD(first_seg, iavf_timestamp_dynfield_offset, @@ -1969,16 +1947,6 @@ iavf_rx_scan_hw_ring_flex_rxd(struct ci_rx_queue *rxq, if (!(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_DD_S))) return 0; - if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) { - uint64_t sw_cur_time = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); - - if (sw_cur_time - rxq->hw_time_update > 4) { - if (iavf_get_phc_time(rxq)) - PMD_DRV_LOG(ERR, "get physical time failed"); - rxq->hw_time_update = sw_cur_time; - } - } - /* Scan LOOK_AHEAD descriptors at a time to determine which * descriptors reference packets that are ready to be received. 
*/ @@ -2041,8 +2009,6 @@ iavf_rx_scan_hw_ring_flex_rxd(struct ci_rx_queue *rxq, rte_le_to_cpu_32(rxdp[j].wb.flex_ts.ts_high)); rxq->phc_time = ts_ns; - rxq->hw_time_update = rte_get_timer_cycles() / - (rte_get_timer_hz() / 1000); *RTE_MBUF_DYNFIELD(mb, iavf_timestamp_dynfield_offset, diff --git a/drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c index db0462f0f5..c91123ead4 100644 --- a/drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c +++ b/drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c @@ -514,19 +514,8 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct ci_rx_queue *rxq, if (!(rxdp->wb.status_error0 & rte_cpu_to_le_32(1 << IAVF_RX_FLEX_DESC_STATUS0_DD_S))) return 0; - bool is_tsinit = false; uint8_t inflection_point = 0; __m256i hw_low_last = _mm256_set_epi32(0, 0, 0, 0, 0, 0, 0, rxq->phc_time); - if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) { - uint64_t sw_cur_time = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); - - if (unlikely(sw_cur_time - rxq->hw_time_update > 4)) { - hw_low_last = _mm256_setzero_si256(); - is_tsinit = 1; - } else { - hw_low_last = _mm256_set_epi32(0, 0, 0, 0, 0, 0, 0, rxq->phc_time); - } - } /* constants used in processing loop */ const __m256i crc_adjust = @@ -1152,14 +1141,9 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct ci_rx_queue *rxq, *RTE_MBUF_DYNFIELD(rx_pkts[i + 7], iavf_timestamp_dynfield_offset, uint32_t *) = _mm256_extract_epi32(ts_low1, 7); - if (unlikely(is_tsinit)) { - uint32_t in_timestamp; - if (iavf_get_phc_time(rxq)) - PMD_DRV_LOG(ERR, "get physical time failed"); - in_timestamp = *RTE_MBUF_DYNFIELD(rx_pkts[i + 0], - iavf_timestamp_dynfield_offset, uint32_t *); - rxq->phc_time = iavf_tstamp_convert_32b_64b(rxq->phc_time, in_timestamp); - } + rxq->phc_time = iavf_tstamp_convert_32b_64b(rxq->phc_time, + *RTE_MBUF_DYNFIELD(rx_pkts[i + 0], + iavf_timestamp_dynfield_offset, uint32_t *)); *RTE_MBUF_DYNFIELD(rx_pkts[i + 0], iavf_timestamp_dynfield_offset + 4, uint32_t *) = 
(uint32_t)(rxq->phc_time >> 32); @@ -1388,8 +1372,6 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct ci_rx_queue *rxq, PMD_DRV_LOG(ERR, "invalid inflection point for rx timestamp"); break; } - - rxq->hw_time_update = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); } if (burst != IAVF_VPMD_DESCS_PER_LOOP_WIDE) break; diff --git a/drivers/net/intel/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/intel/iavf/iavf_rxtx_vec_avx512.c index 4e8bf94fa0..a7c0a02eba 100644 --- a/drivers/net/intel/iavf/iavf_rxtx_vec_avx512.c +++ b/drivers/net/intel/iavf/iavf_rxtx_vec_avx512.c @@ -615,19 +615,7 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct ci_rx_queue *rxq, #ifdef IAVF_RX_TS_OFFLOAD uint8_t inflection_point = 0; - bool is_tsinit = false; __m256i hw_low_last = _mm256_set_epi32(0, 0, 0, 0, 0, 0, 0, (uint32_t)rxq->phc_time); - - if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) { - uint64_t sw_cur_time = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); - - if (unlikely(sw_cur_time - rxq->hw_time_update > 4)) { - hw_low_last = _mm256_setzero_si256(); - is_tsinit = 1; - } else { - hw_low_last = _mm256_set_epi32(0, 0, 0, 0, 0, 0, 0, (uint32_t)rxq->phc_time); - } - } #endif /* constants used in processing loop */ @@ -1343,15 +1331,9 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct ci_rx_queue *rxq, *RTE_MBUF_DYNFIELD(rx_pkts[i + 7], iavf_timestamp_dynfield_offset, uint32_t *) = _mm256_extract_epi32(ts_low1, 7); - if (unlikely(is_tsinit)) { - uint32_t in_timestamp; - - if (iavf_get_phc_time(rxq)) - PMD_DRV_LOG(ERR, "get physical time failed"); - in_timestamp = *RTE_MBUF_DYNFIELD(rx_pkts[i + 0], - iavf_timestamp_dynfield_offset, uint32_t *); - rxq->phc_time = iavf_tstamp_convert_32b_64b(rxq->phc_time, in_timestamp); - } + rxq->phc_time = iavf_tstamp_convert_32b_64b(rxq->phc_time, + *RTE_MBUF_DYNFIELD(rx_pkts[i + 0], + iavf_timestamp_dynfield_offset, uint32_t *)); *RTE_MBUF_DYNFIELD(rx_pkts[i + 0], iavf_timestamp_dynfield_offset + 4, uint32_t *) = (uint32_t)(rxq->phc_time >> 
32); @@ -1584,8 +1566,6 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct ci_rx_queue *rxq, PMD_DRV_LOG(ERR, "invalid inflection point for rx timestamp"); break; } - - rxq->hw_time_update = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); } #endif if (burst != IAVF_VPMD_DESCS_PER_LOOP_WIDE) -- 2.47.1 ^ permalink raw reply related [flat|nested] 13+ messages in thread
* Re: [PATCH v3 1/2] net/iavf: remove PHC polling from Rx datapath 2026-04-06 21:22 ` [PATCH v3 1/2] net/iavf: remove PHC polling from Rx datapath Soumyadeep Hore @ 2026-04-08 16:27 ` Bruce Richardson 0 siblings, 0 replies; 13+ messages in thread From: Bruce Richardson @ 2026-04-08 16:27 UTC (permalink / raw) To: Soumyadeep Hore; +Cc: manoj.kumar.subbarao, aman.deep.singh, dev, rajesh3.kumar On Mon, Apr 06, 2026 at 05:22:07PM -0400, Soumyadeep Hore wrote: > Remove periodic PHC read/update checks from scalar and vector flex > RX paths, keeping timestamp conversion based on queue PHC state. > > This avoids hot-path PHC polling overhead and preserves the latency > fix for RX timestamp-enabled traffic. > > Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com> > --- > drivers/net/intel/iavf/iavf_rxtx.c | 34 ------------------- > drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c | 24 ++----------- > drivers/net/intel/iavf/iavf_rxtx_vec_avx512.c | 26 ++------------ > 3 files changed, 6 insertions(+), 78 deletions(-) > With all the code deletions, does the feature still work after this patch? If not, I will probably need to squash patches 1 & 2 together on apply so that the feature is not broken in the middle of the set. Also, patches are missing fixes lines and the reference to the relevant bugzilla [1]. 
Thanks, /Bruce [1] https://bugs.dpdk.org/show_bug.cgi?id=1898 > diff --git a/drivers/net/intel/iavf/iavf_rxtx.c b/drivers/net/intel/iavf/iavf_rxtx.c > index 4ff6c18dc4..fabccc89bf 100644 > --- a/drivers/net/intel/iavf/iavf_rxtx.c > +++ b/drivers/net/intel/iavf/iavf_rxtx.c > @@ -1507,16 +1507,6 @@ iavf_recv_pkts_flex_rxd(void *rx_queue, > rx_ring = rxq->rx_flex_ring; > ptype_tbl = rxq->iavf_vsi->adapter->ptype_tbl; > > - if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) { > - uint64_t sw_cur_time = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); > - > - if (sw_cur_time - rxq->hw_time_update > 4) { > - if (iavf_get_phc_time(rxq)) > - PMD_DRV_LOG(ERR, "get physical time failed"); > - rxq->hw_time_update = sw_cur_time; > - } > - } > - > while (nb_rx < nb_pkts) { > rxdp = &rx_ring[rx_id]; > rx_stat_err0 = rte_le_to_cpu_16(rxdp->wb.status_error0); > @@ -1585,7 +1575,6 @@ iavf_recv_pkts_flex_rxd(void *rx_queue, > rte_le_to_cpu_32(rxd.wb.flex_ts.ts_high)); > > rxq->phc_time = ts_ns; > - rxq->hw_time_update = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); > > *RTE_MBUF_DYNFIELD(rxm, > iavf_timestamp_dynfield_offset, > @@ -1627,16 +1616,6 @@ iavf_recv_scattered_pkts_flex_rxd(void *rx_queue, struct rte_mbuf **rx_pkts, > volatile union ci_rx_flex_desc *rxdp; > const uint32_t *ptype_tbl = rxq->iavf_vsi->adapter->ptype_tbl; > > - if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) { > - uint64_t sw_cur_time = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); > - > - if (sw_cur_time - rxq->hw_time_update > 4) { > - if (iavf_get_phc_time(rxq)) > - PMD_DRV_LOG(ERR, "get physical time failed"); > - rxq->hw_time_update = sw_cur_time; > - } > - } > - > while (nb_rx < nb_pkts) { > rxdp = &rx_ring[rx_id]; > rx_stat_err0 = rte_le_to_cpu_16(rxdp->wb.status_error0); > @@ -1755,7 +1734,6 @@ iavf_recv_scattered_pkts_flex_rxd(void *rx_queue, struct rte_mbuf **rx_pkts, > rte_le_to_cpu_32(rxd.wb.flex_ts.ts_high)); > > rxq->phc_time = ts_ns; > - rxq->hw_time_update = 
rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); *RTE_MBUF_DYNFIELD(first_seg, iavf_timestamp_dynfield_offset, @@ -1969,16 +1947,6 @@ iavf_rx_scan_hw_ring_flex_rxd(struct ci_rx_queue *rxq, if (!(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_DD_S))) return 0; - if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) { - uint64_t sw_cur_time = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); - if (sw_cur_time - rxq->hw_time_update > 4) { - if (iavf_get_phc_time(rxq)) - PMD_DRV_LOG(ERR, "get physical time failed"); - rxq->hw_time_update = sw_cur_time; - } - } - /* Scan LOOK_AHEAD descriptors at a time to determine which * descriptors reference packets that are ready to be received. */ @@ -2041,8 +2009,6 @@ iavf_rx_scan_hw_ring_flex_rxd(struct ci_rx_queue *rxq, rte_le_to_cpu_32(rxdp[j].wb.flex_ts.ts_high)); rxq->phc_time = ts_ns; - rxq->hw_time_update = rte_get_timer_cycles() / - (rte_get_timer_hz() / 1000); *RTE_MBUF_DYNFIELD(mb, iavf_timestamp_dynfield_offset, diff --git a/drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c index db0462f0f5..c91123ead4 100644 --- a/drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c +++ b/drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c @@ -514,19 +514,8 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct ci_rx_queue *rxq, if (!(rxdp->wb.status_error0 & rte_cpu_to_le_32(1 << IAVF_RX_FLEX_DESC_STATUS0_DD_S))) return 0; - bool is_tsinit = false; uint8_t inflection_point = 0; __m256i hw_low_last = _mm256_set_epi32(0, 0, 0, 0, 0, 0, 0, rxq->phc_time); - if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) { - uint64_t sw_cur_time = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); - if (unlikely(sw_cur_time - rxq->hw_time_update > 4)) { - hw_low_last = _mm256_setzero_si256(); - is_tsinit = 1; - } else { - hw_low_last = _mm256_set_epi32(0, 0, 0, 0, 0, 0, 0, rxq->phc_time); - } - } /* constants used in
processing loop */ const __m256i crc_adjust = @@ -1152,14 +1141,9 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct ci_rx_queue *rxq, *RTE_MBUF_DYNFIELD(rx_pkts[i + 7], iavf_timestamp_dynfield_offset, uint32_t *) = _mm256_extract_epi32(ts_low1, 7); - if (unlikely(is_tsinit)) { - uint32_t in_timestamp; - if (iavf_get_phc_time(rxq)) - PMD_DRV_LOG(ERR, "get physical time failed"); - in_timestamp = *RTE_MBUF_DYNFIELD(rx_pkts[i + 0], - iavf_timestamp_dynfield_offset, uint32_t *); - rxq->phc_time = iavf_tstamp_convert_32b_64b(rxq->phc_time, in_timestamp); - } + rxq->phc_time = iavf_tstamp_convert_32b_64b(rxq->phc_time, + *RTE_MBUF_DYNFIELD(rx_pkts[i + 0], + iavf_timestamp_dynfield_offset, uint32_t *)); *RTE_MBUF_DYNFIELD(rx_pkts[i + 0], iavf_timestamp_dynfield_offset + 4, uint32_t *) = (uint32_t)(rxq->phc_time >> 32); @@ -1388,8 +1372,6 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct ci_rx_queue *rxq, PMD_DRV_LOG(ERR, "invalid inflection point for rx timestamp"); break; } - - rxq->hw_time_update = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); } if (burst != IAVF_VPMD_DESCS_PER_LOOP_WIDE) break; diff --git a/drivers/net/intel/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/intel/iavf/iavf_rxtx_vec_avx512.c index 4e8bf94fa0..a7c0a02eba 100644 --- a/drivers/net/intel/iavf/iavf_rxtx_vec_avx512.c +++ b/drivers/net/intel/iavf/iavf_rxtx_vec_avx512.c @@ -615,19 +615,7 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct ci_rx_queue *rxq, #ifdef IAVF_RX_TS_OFFLOAD uint8_t inflection_point = 0; - bool is_tsinit = false; __m256i hw_low_last = _mm256_set_epi32(0, 0, 0, 0, 0, 0, 0, (uint32_t)rxq->phc_time); - - if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) { - uint64_t sw_cur_time = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); - if (unlikely(sw_cur_time - rxq->hw_time_update > 4)) { - hw_low_last = _mm256_setzero_si256(); - is_tsinit = 1; - } else { - hw_low_last =
_mm256_set_epi32(0, 0, 0, 0, 0, 0, 0, (uint32_t)rxq->phc_time); - } - } #endif /* constants used in processing loop */ @@ -1343,15 +1331,9 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct ci_rx_queue *rxq, *RTE_MBUF_DYNFIELD(rx_pkts[i + 7], iavf_timestamp_dynfield_offset, uint32_t *) = _mm256_extract_epi32(ts_low1, 7); - if (unlikely(is_tsinit)) { - uint32_t in_timestamp; - if (iavf_get_phc_time(rxq)) - PMD_DRV_LOG(ERR, "get physical time failed"); - in_timestamp = *RTE_MBUF_DYNFIELD(rx_pkts[i + 0], - iavf_timestamp_dynfield_offset, uint32_t *); - rxq->phc_time = iavf_tstamp_convert_32b_64b(rxq->phc_time, in_timestamp); - } + rxq->phc_time = iavf_tstamp_convert_32b_64b(rxq->phc_time, + *RTE_MBUF_DYNFIELD(rx_pkts[i + 0], + iavf_timestamp_dynfield_offset, uint32_t *)); *RTE_MBUF_DYNFIELD(rx_pkts[i + 0], iavf_timestamp_dynfield_offset + 4, uint32_t *) = (uint32_t)(rxq->phc_time >> 32); @@ -1584,8 +1566,6 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct ci_rx_queue *rxq, PMD_DRV_LOG(ERR, "invalid inflection point for rx timestamp"); break; } - - rxq->hw_time_update = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); } #endif if (burst != IAVF_VPMD_DESCS_PER_LOOP_WIDE) -- 2.47.1 ^ permalink raw reply [flat|nested] 13+ messages in thread
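[Editorial note] After this patch, the vector paths extend each burst's low 32-bit hardware timestamp against the queue's cached 64-bit PHC value via iavf_tstamp_convert_32b_64b(). A minimal sketch of how such a 32b-to-64b extension can work is below; the function name mirrors the driver's, but the rollover heuristic and constants here are illustrative assumptions, not the driver's actual implementation.

```c
#include <stdint.h>

/* Illustrative sketch: combine a cached 64-bit PHC time with a fresh
 * low-32-bit hardware timestamp, bumping the high word when the 32-bit
 * counter appears to have wrapped since the last sync. The half-range
 * wrap heuristic is an assumption for this example. */
static uint64_t
tstamp_convert_32b_64b(uint64_t cached_phc_time, uint32_t in_timestamp)
{
	uint32_t phc_low = (uint32_t)cached_phc_time;
	uint64_t high = cached_phc_time & 0xFFFFFFFF00000000ULL;

	/* A large backwards jump in the low word means the 32-bit
	 * counter rolled over since the cached value was taken. */
	if (in_timestamp < phc_low &&
	    (uint32_t)(phc_low - in_timestamp) > UINT32_MAX / 2)
		high += 0x100000000ULL;

	return high | in_timestamp;
}
```

The point of keeping this conversion in the datapath is that only the cheap low/high recombination runs per burst, while the expensive PHC read moves to the alarm context.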
* [PATCH v3 2/2] net/iavf: reuse device alarm for PHC sync 2026-04-06 21:22 ` [PATCH v3 0/2] Update Rx Timestamp in IAVF PMD Soumyadeep Hore 2026-04-06 21:22 ` [PATCH v3 1/2] net/iavf: remove PHC polling from Rx datapath Soumyadeep Hore @ 2026-04-06 21:22 ` Soumyadeep Hore 1 sibling, 0 replies; 13+ messages in thread From: Soumyadeep Hore @ 2026-04-06 21:22 UTC (permalink / raw) To: bruce.richardson, manoj.kumar.subbarao, aman.deep.singh, dev, rajesh3.kumar Reuse existing iavf device alarm cadence to drive periodic PHC sync instead of a dedicated PHC alarm callback. Keep PHC start/stop hooks as pause/resume controls around queue reconfiguration and device lifecycle paths. Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com> --- drivers/net/intel/iavf/iavf.h | 6 ++ drivers/net/intel/iavf/iavf_ethdev.c | 128 +++++++++++++++++++++++++++ drivers/net/intel/iavf/iavf_vchnl.c | 4 + 3 files changed, 138 insertions(+) diff --git a/drivers/net/intel/iavf/iavf.h b/drivers/net/intel/iavf/iavf.h index 403c61e2e8..e30fd710f0 100644 --- a/drivers/net/intel/iavf/iavf.h +++ b/drivers/net/intel/iavf/iavf.h @@ -76,6 +76,7 @@ #define IAVF_QUEUE_ITR_INTERVAL_MAX 8160 /* 8160 us */ #define IAVF_ALARM_INTERVAL 50000 /* us */ +#define IAVF_PHC_SYNC_ALARM_INTERVAL_US 200000 /* The overhead from MTU to max frame size. * Considering QinQ packet, the VLAN tag needs to be counted twice. 
@@ -383,6 +384,9 @@ struct iavf_adapter { enum iavf_rx_func_type rx_func_type; enum iavf_tx_func_type tx_func_type; uint16_t fdir_ref_cnt; + rte_spinlock_t phc_sync_lock; + uint16_t phc_sync_ticks; + bool phc_sync_paused; struct iavf_devargs devargs; bool mac_primary_set; }; @@ -517,6 +521,8 @@ void iavf_add_del_all_mac_addr(struct iavf_adapter *adapter, bool add); int iavf_dev_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete); void iavf_dev_alarm_handler(void *param); +void iavf_phc_sync_alarm_start(struct rte_eth_dev *dev); +void iavf_phc_sync_alarm_stop(struct rte_eth_dev *dev); int iavf_query_stats(struct iavf_adapter *adapter, struct virtchnl_eth_stats **pstats); int iavf_config_promisc(struct iavf_adapter *adapter, bool enable_unicast, diff --git a/drivers/net/intel/iavf/iavf_ethdev.c b/drivers/net/intel/iavf/iavf_ethdev.c index 1eca20bc9a..9c9a5a6b47 100644 --- a/drivers/net/intel/iavf/iavf_ethdev.c +++ b/drivers/net/intel/iavf/iavf_ethdev.c @@ -21,6 +21,7 @@ #include <rte_pci.h> #include <rte_alarm.h> #include <rte_atomic.h> +#include <rte_cycles.h> #include <rte_eal.h> #include <rte_ether.h> #include <ethdev_driver.h> @@ -145,6 +146,11 @@ static int iavf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id); static void iavf_dev_interrupt_handler(void *param); static void iavf_disable_irq0(struct iavf_hw *hw); +static struct ci_rx_queue *iavf_phc_sync_rxq_get(struct rte_eth_dev *dev); +static void iavf_phc_sync_update_all_rxq(struct rte_eth_dev *dev, + uint64_t phc_time, + uint64_t sw_cur_time); +static bool iavf_phc_sync_alarm_needed(struct rte_eth_dev *dev); static int iavf_dev_flow_ops_get(struct rte_eth_dev *dev, const struct rte_flow_ops **ops); static int iavf_set_mc_addr_list(struct rte_eth_dev *dev, @@ -1056,6 +1062,8 @@ iavf_dev_start(struct rte_eth_dev *dev) goto error; } + iavf_phc_sync_alarm_start(dev); + return 0; error: @@ -1082,6 +1090,8 @@ iavf_dev_stop(struct rte_eth_dev *dev) if (adapter->stopped == 
1) return 0; + iavf_phc_sync_alarm_stop(dev); + /* Disable the interrupt for Rx */ rte_intr_efd_disable(intr_handle); /* Rx interrupt vector mapping free */ @@ -2705,9 +2715,13 @@ void iavf_dev_alarm_handler(void *param) { struct rte_eth_dev *dev = (struct rte_eth_dev *)param; + struct iavf_adapter *adapter; + const uint16_t phc_sync_ticks_max = RTE_MAX((uint16_t)1, + (uint16_t)(IAVF_PHC_SYNC_ALARM_INTERVAL_US / IAVF_ALARM_INTERVAL)); if (dev == NULL || dev->data == NULL || dev->data->dev_private == NULL) return; + adapter = IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private); uint32_t icr0; @@ -2723,10 +2737,121 @@ iavf_dev_alarm_handler(void *param) iavf_enable_irq0(hw); + rte_spinlock_lock(&adapter->phc_sync_lock); + if (!adapter->phc_sync_paused && + iavf_phc_sync_alarm_needed(dev)) { + uint16_t phc_sync_ticks = + ++adapter->phc_sync_ticks; + + if (phc_sync_ticks >= phc_sync_ticks_max) { + struct ci_rx_queue *sync_rxq; + uint64_t sw_cur_time; + + adapter->phc_sync_ticks = 0; + sync_rxq = iavf_phc_sync_rxq_get(dev); + if (sync_rxq != NULL && iavf_get_phc_time(sync_rxq) == 0) { + sw_cur_time = rte_get_timer_cycles() / + (rte_get_timer_hz() / 1000); + iavf_phc_sync_update_all_rxq(dev, + sync_rxq->phc_time, sw_cur_time); + } else if (sync_rxq != NULL) { + PMD_DRV_LOG(ERR, "get physical time failed"); + } + } + } else { + adapter->phc_sync_ticks = 0; + } + rte_spinlock_unlock(&adapter->phc_sync_lock); + rte_eal_alarm_set(IAVF_ALARM_INTERVAL, iavf_dev_alarm_handler, dev); } +static struct ci_rx_queue * +iavf_phc_sync_rxq_get(struct rte_eth_dev *dev) +{ + struct ci_rx_queue *rxq; + uint16_t i; + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + rxq = dev->data->rx_queues[i]; + if (rxq != NULL) + return rxq; + } + + return NULL; +} + +static void +iavf_phc_sync_update_all_rxq(struct rte_eth_dev *dev, + uint64_t phc_time, + uint64_t sw_cur_time) +{ + struct ci_rx_queue *rxq; + uint16_t i; + + for (i 
= 0; i < dev->data->nb_rx_queues; i++) { + rxq = dev->data->rx_queues[i]; + if (rxq == NULL) + continue; + + rxq->phc_time = phc_time; + rxq->hw_time_update = sw_cur_time; + } +} + +static bool +iavf_phc_sync_alarm_needed(struct rte_eth_dev *dev) +{ + struct iavf_adapter *adapter; + + adapter = IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + + if (adapter->closed || adapter->stopped) + return false; + + if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)) + return false; + + if (dev->data->nb_rx_queues == 0) + return false; + + if (iavf_phc_sync_rxq_get(dev) == NULL) + return false; + + return true; +} + +void +iavf_phc_sync_alarm_start(struct rte_eth_dev *dev) +{ + struct iavf_adapter *adapter; + + if (!iavf_phc_sync_alarm_needed(dev)) + return; + + adapter = IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + rte_spinlock_lock(&adapter->phc_sync_lock); + adapter->phc_sync_paused = false; + adapter->phc_sync_ticks = 0; + rte_spinlock_unlock(&adapter->phc_sync_lock); +} + +void +iavf_phc_sync_alarm_stop(struct rte_eth_dev *dev) +{ + struct iavf_adapter *adapter; + + if (dev == NULL || dev->data == NULL || dev->data->dev_private == NULL) + return; + + adapter = IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + rte_spinlock_lock(&adapter->phc_sync_lock); + adapter->phc_sync_paused = true; + adapter->phc_sync_ticks = 0; + rte_spinlock_unlock(&adapter->phc_sync_lock); +} + static int iavf_dev_flow_ops_get(struct rte_eth_dev *dev, const struct rte_flow_ops **ops) @@ -2808,6 +2933,7 @@ iavf_dev_init(struct rte_eth_dev *eth_dev) adapter->dev_data = eth_dev->data; adapter->stopped = 1; adapter->mac_primary_set = false; + rte_spinlock_init(&adapter->phc_sync_lock); if (iavf_dev_event_handler_init()) goto init_vf_err; @@ -2912,6 +3038,7 @@ iavf_dev_init(struct rte_eth_dev *eth_dev) } else { rte_eal_alarm_cancel(iavf_dev_alarm_handler, eth_dev); } + iavf_phc_sync_alarm_stop(eth_dev); rte_free(eth_dev->data->mac_addrs); 
eth_dev->data->mac_addrs = NULL; @@ -2986,6 +3113,7 @@ iavf_dev_close(struct rte_eth_dev *dev) } else { rte_eal_alarm_cancel(iavf_dev_alarm_handler, dev); } + iavf_phc_sync_alarm_stop(dev); iavf_disable_irq0(hw); if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_QOS) diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c index 08dd6f2d7f..79ef4cec56 100644 --- a/drivers/net/intel/iavf/iavf_vchnl.c +++ b/drivers/net/intel/iavf/iavf_vchnl.c @@ -2133,12 +2133,16 @@ iavf_request_queues(struct rte_eth_dev *dev, uint16_t num) args.out_size = IAVF_AQ_BUF_SZ; if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) { + iavf_phc_sync_alarm_stop(dev); err = iavf_execute_vf_cmd_safe(adapter, &args, 0); + iavf_phc_sync_alarm_start(dev); } else { + iavf_phc_sync_alarm_stop(dev); rte_eal_alarm_cancel(iavf_dev_alarm_handler, dev); err = iavf_execute_vf_cmd_safe(adapter, &args, 0); rte_eal_alarm_set(IAVF_ALARM_INTERVAL, iavf_dev_alarm_handler, dev); + iavf_phc_sync_alarm_start(dev); } if (err) { -- 2.47.1 ^ permalink raw reply related [flat|nested] 13+ messages in thread
* [PATCH v2 2/2] net/iavf: reuse device alarm for PHC sync 2026-04-02 15:46 ` [PATCH v2 0/2] Update Rx Timestamp in IAVF PMD Soumyadeep Hore 2026-04-02 15:46 ` [PATCH v2 1/2] net/iavf: remove PHC polling from Rx datapath Soumyadeep Hore @ 2026-04-02 15:46 ` Soumyadeep Hore 1 sibling, 0 replies; 13+ messages in thread From: Soumyadeep Hore @ 2026-04-02 15:46 UTC (permalink / raw) To: bruce.richardson, manoj.kumar.subbarao, aman.deep.singh, dev, rajesh3.kumar Reuse existing iavf device alarm cadence to drive periodic PHC sync instead of a dedicated PHC alarm callback. Keep PHC start/stop hooks as pause/resume controls around queue reconfiguration and device lifecycle paths. Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com> --- drivers/net/intel/iavf/iavf.h | 5 ++ drivers/net/intel/iavf/iavf_ethdev.c | 70 ++++++++++++++++++++++++++++ drivers/net/intel/iavf/iavf_vchnl.c | 4 ++ 3 files changed, 79 insertions(+) diff --git a/drivers/net/intel/iavf/iavf.h b/drivers/net/intel/iavf/iavf.h index 403c61e2e8..2f1779d47b 100644 --- a/drivers/net/intel/iavf/iavf.h +++ b/drivers/net/intel/iavf/iavf.h @@ -76,6 +76,7 @@ #define IAVF_QUEUE_ITR_INTERVAL_MAX 8160 /* 8160 us */ #define IAVF_ALARM_INTERVAL 50000 /* us */ +#define IAVF_PHC_SYNC_ALARM_INTERVAL_US 200000 /* The overhead from MTU to max frame size. * Considering QinQ packet, the VLAN tag needs to be counted twice. 
@@ -383,6 +384,8 @@ struct iavf_adapter { enum iavf_rx_func_type rx_func_type; enum iavf_tx_func_type tx_func_type; uint16_t fdir_ref_cnt; + uint8_t phc_sync_ticks; + bool phc_sync_paused; struct iavf_devargs devargs; bool mac_primary_set; }; @@ -517,6 +520,8 @@ void iavf_add_del_all_mac_addr(struct iavf_adapter *adapter, bool add); int iavf_dev_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete); void iavf_dev_alarm_handler(void *param); +void iavf_phc_sync_alarm_start(struct rte_eth_dev *dev); +void iavf_phc_sync_alarm_stop(struct rte_eth_dev *dev); int iavf_query_stats(struct iavf_adapter *adapter, struct virtchnl_eth_stats **pstats); int iavf_config_promisc(struct iavf_adapter *adapter, bool enable_unicast, diff --git a/drivers/net/intel/iavf/iavf_ethdev.c b/drivers/net/intel/iavf/iavf_ethdev.c index 1eca20bc9a..02272d45c1 100644 --- a/drivers/net/intel/iavf/iavf_ethdev.c +++ b/drivers/net/intel/iavf/iavf_ethdev.c @@ -21,6 +21,7 @@ #include <rte_pci.h> #include <rte_alarm.h> #include <rte_atomic.h> +#include <rte_cycles.h> #include <rte_eal.h> #include <rte_ether.h> #include <ethdev_driver.h> @@ -145,6 +146,7 @@ static int iavf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id); static void iavf_dev_interrupt_handler(void *param); static void iavf_disable_irq0(struct iavf_hw *hw); +static bool iavf_phc_sync_alarm_needed(struct rte_eth_dev *dev); static int iavf_dev_flow_ops_get(struct rte_eth_dev *dev, const struct rte_flow_ops **ops); static int iavf_set_mc_addr_list(struct rte_eth_dev *dev, @@ -1056,6 +1058,8 @@ iavf_dev_start(struct rte_eth_dev *dev) goto error; } + iavf_phc_sync_alarm_start(dev); + return 0; error: @@ -1082,6 +1086,8 @@ iavf_dev_stop(struct rte_eth_dev *dev) if (adapter->stopped == 1) return 0; + iavf_phc_sync_alarm_stop(dev); + /* Disable the interrupt for Rx */ rte_intr_efd_disable(intr_handle); /* Rx interrupt vector mapping free */ @@ -2705,9 +2711,11 @@ void iavf_dev_alarm_handler(void *param) { 
struct rte_eth_dev *dev = (struct rte_eth_dev *)param; + struct iavf_adapter *adapter; if (dev == NULL || dev->data == NULL || dev->data->dev_private == NULL) return; + adapter = IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private); uint32_t icr0; @@ -2723,10 +2731,70 @@ iavf_dev_alarm_handler(void *param) iavf_enable_irq0(hw); + if (iavf_phc_sync_alarm_needed(dev) && !adapter->phc_sync_paused) { + adapter->phc_sync_ticks++; + if (adapter->phc_sync_ticks >= + IAVF_PHC_SYNC_ALARM_INTERVAL_US / IAVF_ALARM_INTERVAL) { + struct ci_rx_queue *rxq = dev->data->rx_queues[0]; + + adapter->phc_sync_ticks = 0; + if (iavf_get_phc_time(rxq) == 0) + rxq->hw_time_update = rte_get_timer_cycles() / + (rte_get_timer_hz() / 1000); + } + } else { + adapter->phc_sync_ticks = 0; + } + rte_eal_alarm_set(IAVF_ALARM_INTERVAL, iavf_dev_alarm_handler, dev); } +static bool +iavf_phc_sync_alarm_needed(struct rte_eth_dev *dev) +{ + struct iavf_adapter *adapter; + + adapter = IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + + if (adapter->closed || adapter->stopped) + return false; + + if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)) + return false; + + if (dev->data->nb_rx_queues == 0 || dev->data->rx_queues[0] == NULL) + return false; + + return true; +} + +void +iavf_phc_sync_alarm_start(struct rte_eth_dev *dev) +{ + struct iavf_adapter *adapter; + + if (!iavf_phc_sync_alarm_needed(dev)) + return; + + adapter = IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + adapter->phc_sync_paused = false; + adapter->phc_sync_ticks = 0; +} + +void +iavf_phc_sync_alarm_stop(struct rte_eth_dev *dev) +{ + struct iavf_adapter *adapter; + + if (dev == NULL || dev->data == NULL || dev->data->dev_private == NULL) + return; + + adapter = IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + adapter->phc_sync_paused = true; + adapter->phc_sync_ticks = 0; +} + static int iavf_dev_flow_ops_get(struct rte_eth_dev 
*dev, const struct rte_flow_ops **ops) @@ -2912,6 +2980,7 @@ iavf_dev_init(struct rte_eth_dev *eth_dev) } else { rte_eal_alarm_cancel(iavf_dev_alarm_handler, eth_dev); } + iavf_phc_sync_alarm_stop(eth_dev); rte_free(eth_dev->data->mac_addrs); eth_dev->data->mac_addrs = NULL; @@ -2986,6 +3055,7 @@ iavf_dev_close(struct rte_eth_dev *dev) } else { rte_eal_alarm_cancel(iavf_dev_alarm_handler, dev); } + iavf_phc_sync_alarm_stop(dev); iavf_disable_irq0(hw); if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_QOS) diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c index 08dd6f2d7f..82943472e1 100644 --- a/drivers/net/intel/iavf/iavf_vchnl.c +++ b/drivers/net/intel/iavf/iavf_vchnl.c @@ -2133,12 +2133,16 @@ iavf_request_queues(struct rte_eth_dev *dev, uint16_t num) args.out_size = IAVF_AQ_BUF_SZ; if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) { + iavf_phc_sync_alarm_stop(dev); err = iavf_execute_vf_cmd_safe(adapter, &args, 0); + iavf_phc_sync_alarm_start(dev); } else { rte_eal_alarm_cancel(iavf_dev_alarm_handler, dev); + iavf_phc_sync_alarm_stop(dev); err = iavf_execute_vf_cmd_safe(adapter, &args, 0); rte_eal_alarm_set(IAVF_ALARM_INTERVAL, iavf_dev_alarm_handler, dev); + iavf_phc_sync_alarm_start(dev); } if (err) { -- 2.47.1 ^ permalink raw reply related [flat|nested] 13+ messages in thread
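[Editorial note] The staleness window the Rx hot path used to enforce, and which the alarm now covers, was a 4 ms bound on a millisecond software clock derived from the TSC via rte_get_timer_cycles() / (rte_get_timer_hz() / 1000). A self-contained sketch of that arithmetic, with the DPDK timer calls stubbed out as a fixed frequency for illustration:

```c
#include <stdint.h>

/* Stand-in for rte_get_timer_hz(): assume a 2 GHz timer for this
 * example only; the real value is probed at runtime by DPDK. */
static const uint64_t timer_hz = 2000000000ULL;

/* Millisecond software clock, as in the removed hot-path check:
 * divide raw timer cycles by cycles-per-millisecond. */
static uint64_t
cycles_to_ms(uint64_t cycles)
{
	return cycles / (timer_hz / 1000);
}

/* The removed datapath test: the cached PHC value was treated as
 * stale once more than 4 ms had passed since the last update. */
static int
phc_is_stale(uint64_t now_ms, uint64_t last_update_ms)
{
	return now_ms - last_update_ms > 4;
}
```

Moving this check out of the receive loop removes a division and a potential admin-queue round trip (iavf_get_phc_time) from every burst, which is the latency issue the cover letter describes.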
* [PATCH v2 0/2] Update Rx Timestamp in IAVF PMD 2026-04-02 15:21 ` [PATCH v1 1/2] net/iavf: remove PHC polling from Rx datapath Soumyadeep Hore 2026-04-02 15:46 ` [PATCH v2 0/2] Update Rx Timestamp in IAVF PMD Soumyadeep Hore @ 2026-04-02 15:48 ` Soumyadeep Hore 2026-04-02 15:48 ` [PATCH v2 1/2] net/iavf: remove PHC polling from Rx datapath Soumyadeep Hore 2026-04-02 15:48 ` [PATCH v2 2/2] net/iavf: reuse device alarm for PHC sync Soumyadeep Hore 1 sibling, 2 replies; 13+ messages in thread From: Soumyadeep Hore @ 2026-04-02 15:48 UTC (permalink / raw) To: bruce.richardson, manoj.kumar.subbarao, aman.deep.singh, dev, rajesh3.kumar PHC Polling from Rx Datapath is removed and existing alarm handlers are used to fix latency issues in IAVF PMD. --- v2: - Fixed patch apply issues --- Soumyadeep Hore (2): net/iavf: remove PHC polling from Rx datapath net/iavf: reuse device alarm for PHC sync drivers/net/intel/iavf/iavf.h | 5 ++ drivers/net/intel/iavf/iavf_ethdev.c | 70 +++++++++++++++++++ drivers/net/intel/iavf/iavf_rxtx.c | 34 --------- drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c | 16 +---- drivers/net/intel/iavf/iavf_rxtx_vec_avx512.c | 16 +---- drivers/net/intel/iavf/iavf_vchnl.c | 4 ++ 6 files changed, 83 insertions(+), 62 deletions(-) -- 2.47.1 ^ permalink raw reply [flat|nested] 13+ messages in thread
* [PATCH v2 1/2] net/iavf: remove PHC polling from Rx datapath 2026-04-02 15:48 ` [PATCH v2 0/2] Update Rx Timestamp in IAVF PMD Soumyadeep Hore @ 2026-04-02 15:48 ` Soumyadeep Hore 2026-04-02 15:48 ` [PATCH v2 2/2] net/iavf: reuse device alarm for PHC sync Soumyadeep Hore 1 sibling, 0 replies; 13+ messages in thread From: Soumyadeep Hore @ 2026-04-02 15:48 UTC (permalink / raw) To: bruce.richardson, manoj.kumar.subbarao, aman.deep.singh, dev, rajesh3.kumar Remove periodic PHC read/update checks from scalar and vector flex RX paths, keeping timestamp conversion based on queue PHC state. This avoids hot-path PHC polling overhead and preserves the latency fix for RX timestamp-enabled traffic. Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com> --- drivers/net/intel/iavf/iavf_rxtx.c | 34 ------------------- drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c | 16 ++------- drivers/net/intel/iavf/iavf_rxtx_vec_avx512.c | 16 ++------- 3 files changed, 4 insertions(+), 62 deletions(-) diff --git a/drivers/net/intel/iavf/iavf_rxtx.c b/drivers/net/intel/iavf/iavf_rxtx.c index 4ff6c18dc4..fabccc89bf 100644 --- a/drivers/net/intel/iavf/iavf_rxtx.c +++ b/drivers/net/intel/iavf/iavf_rxtx.c @@ -1507,16 +1507,6 @@ iavf_recv_pkts_flex_rxd(void *rx_queue, rx_ring = rxq->rx_flex_ring; ptype_tbl = rxq->iavf_vsi->adapter->ptype_tbl; - if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) { - uint64_t sw_cur_time = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); - - if (sw_cur_time - rxq->hw_time_update > 4) { - if (iavf_get_phc_time(rxq)) - PMD_DRV_LOG(ERR, "get physical time failed"); - rxq->hw_time_update = sw_cur_time; - } - } - while (nb_rx < nb_pkts) { rxdp = &rx_ring[rx_id]; rx_stat_err0 = rte_le_to_cpu_16(rxdp->wb.status_error0); @@ -1585,7 +1575,6 @@ iavf_recv_pkts_flex_rxd(void *rx_queue, rte_le_to_cpu_32(rxd.wb.flex_ts.ts_high)); rxq->phc_time = ts_ns; - rxq->hw_time_update = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); *RTE_MBUF_DYNFIELD(rxm, 
iavf_timestamp_dynfield_offset, @@ -1627,16 +1616,6 @@ iavf_recv_scattered_pkts_flex_rxd(void *rx_queue, struct rte_mbuf **rx_pkts, volatile union ci_rx_flex_desc *rxdp; const uint32_t *ptype_tbl = rxq->iavf_vsi->adapter->ptype_tbl; - if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) { - uint64_t sw_cur_time = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); - - if (sw_cur_time - rxq->hw_time_update > 4) { - if (iavf_get_phc_time(rxq)) - PMD_DRV_LOG(ERR, "get physical time failed"); - rxq->hw_time_update = sw_cur_time; - } - } - while (nb_rx < nb_pkts) { rxdp = &rx_ring[rx_id]; rx_stat_err0 = rte_le_to_cpu_16(rxdp->wb.status_error0); @@ -1755,7 +1734,6 @@ iavf_recv_scattered_pkts_flex_rxd(void *rx_queue, struct rte_mbuf **rx_pkts, rte_le_to_cpu_32(rxd.wb.flex_ts.ts_high)); rxq->phc_time = ts_ns; - rxq->hw_time_update = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); *RTE_MBUF_DYNFIELD(first_seg, iavf_timestamp_dynfield_offset, @@ -1969,16 +1947,6 @@ iavf_rx_scan_hw_ring_flex_rxd(struct ci_rx_queue *rxq, if (!(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_DD_S))) return 0; - if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) { - uint64_t sw_cur_time = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); - - if (sw_cur_time - rxq->hw_time_update > 4) { - if (iavf_get_phc_time(rxq)) - PMD_DRV_LOG(ERR, "get physical time failed"); - rxq->hw_time_update = sw_cur_time; - } - } - /* Scan LOOK_AHEAD descriptors at a time to determine which * descriptors reference packets that are ready to be received. 
*/ @@ -2041,8 +2009,6 @@ iavf_rx_scan_hw_ring_flex_rxd(struct ci_rx_queue *rxq, rte_le_to_cpu_32(rxdp[j].wb.flex_ts.ts_high)); rxq->phc_time = ts_ns; - rxq->hw_time_update = rte_get_timer_cycles() / - (rte_get_timer_hz() / 1000); *RTE_MBUF_DYNFIELD(mb, iavf_timestamp_dynfield_offset, diff --git a/drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c index db0462f0f5..9349646d55 100644 --- a/drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c +++ b/drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c @@ -514,18 +514,10 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct ci_rx_queue *rxq, if (!(rxdp->wb.status_error0 & rte_cpu_to_le_32(1 << IAVF_RX_FLEX_DESC_STATUS0_DD_S))) return 0; - bool is_tsinit = false; uint8_t inflection_point = 0; __m256i hw_low_last = _mm256_set_epi32(0, 0, 0, 0, 0, 0, 0, rxq->phc_time); if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) { - uint64_t sw_cur_time = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); - - if (unlikely(sw_cur_time - rxq->hw_time_update > 4)) { - hw_low_last = _mm256_setzero_si256(); - is_tsinit = 1; - } else { - hw_low_last = _mm256_set_epi32(0, 0, 0, 0, 0, 0, 0, rxq->phc_time); - } + hw_low_last = _mm256_set_epi32(0, 0, 0, 0, 0, 0, 0, rxq->phc_time); } /* constants used in processing loop */ @@ -1152,10 +1144,8 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct ci_rx_queue *rxq, *RTE_MBUF_DYNFIELD(rx_pkts[i + 7], iavf_timestamp_dynfield_offset, uint32_t *) = _mm256_extract_epi32(ts_low1, 7); - if (unlikely(is_tsinit)) { + { uint32_t in_timestamp; - if (iavf_get_phc_time(rxq)) - PMD_DRV_LOG(ERR, "get physical time failed"); in_timestamp = *RTE_MBUF_DYNFIELD(rx_pkts[i + 0], iavf_timestamp_dynfield_offset, uint32_t *); rxq->phc_time = iavf_tstamp_convert_32b_64b(rxq->phc_time, in_timestamp); @@ -1388,8 +1378,6 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct ci_rx_queue *rxq, PMD_DRV_LOG(ERR, "invalid inflection point for rx timestamp"); break; } - - rxq->hw_time_update = rte_get_timer_cycles() / 
(rte_get_timer_hz() / 1000); } if (burst != IAVF_VPMD_DESCS_PER_LOOP_WIDE) break; diff --git a/drivers/net/intel/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/intel/iavf/iavf_rxtx_vec_avx512.c index 4e8bf94fa0..1bb3e9746b 100644 --- a/drivers/net/intel/iavf/iavf_rxtx_vec_avx512.c +++ b/drivers/net/intel/iavf/iavf_rxtx_vec_avx512.c @@ -615,18 +615,10 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct ci_rx_queue *rxq, #ifdef IAVF_RX_TS_OFFLOAD uint8_t inflection_point = 0; - bool is_tsinit = false; __m256i hw_low_last = _mm256_set_epi32(0, 0, 0, 0, 0, 0, 0, (uint32_t)rxq->phc_time); if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) { - uint64_t sw_cur_time = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); - - if (unlikely(sw_cur_time - rxq->hw_time_update > 4)) { - hw_low_last = _mm256_setzero_si256(); - is_tsinit = 1; - } else { - hw_low_last = _mm256_set_epi32(0, 0, 0, 0, 0, 0, 0, (uint32_t)rxq->phc_time); - } + hw_low_last = _mm256_set_epi32(0, 0, 0, 0, 0, 0, 0, (uint32_t)rxq->phc_time); } #endif @@ -1343,11 +1335,9 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct ci_rx_queue *rxq, *RTE_MBUF_DYNFIELD(rx_pkts[i + 7], iavf_timestamp_dynfield_offset, uint32_t *) = _mm256_extract_epi32(ts_low1, 7); - if (unlikely(is_tsinit)) { + { uint32_t in_timestamp; - if (iavf_get_phc_time(rxq)) - PMD_DRV_LOG(ERR, "get physical time failed"); in_timestamp = *RTE_MBUF_DYNFIELD(rx_pkts[i + 0], iavf_timestamp_dynfield_offset, uint32_t *); rxq->phc_time = iavf_tstamp_convert_32b_64b(rxq->phc_time, in_timestamp); @@ -1584,8 +1574,6 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct ci_rx_queue *rxq, PMD_DRV_LOG(ERR, "invalid inflection point for rx timestamp"); break; } - - rxq->hw_time_update = rte_get_timer_cycles() / (rte_get_timer_hz() / 1000); } #endif if (burst != IAVF_VPMD_DESCS_PER_LOOP_WIDE) -- 2.47.1 ^ permalink raw reply related [flat|nested] 13+ messages in thread
* [PATCH v2 2/2] net/iavf: reuse device alarm for PHC sync 2026-04-02 15:48 ` [PATCH v2 0/2] Update Rx Timestamp in IAVF PMD Soumyadeep Hore 2026-04-02 15:48 ` [PATCH v2 1/2] net/iavf: remove PHC polling from Rx datapath Soumyadeep Hore @ 2026-04-02 15:48 ` Soumyadeep Hore 1 sibling, 0 replies; 13+ messages in thread From: Soumyadeep Hore @ 2026-04-02 15:48 UTC (permalink / raw) To: bruce.richardson, manoj.kumar.subbarao, aman.deep.singh, dev, rajesh3.kumar Reuse existing iavf device alarm cadence to drive periodic PHC sync instead of a dedicated PHC alarm callback. Keep PHC start/stop hooks as pause/resume controls around queue reconfiguration and device lifecycle paths. Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com> --- drivers/net/intel/iavf/iavf.h | 5 ++ drivers/net/intel/iavf/iavf_ethdev.c | 70 ++++++++++++++++++++++++++++ drivers/net/intel/iavf/iavf_vchnl.c | 4 ++ 3 files changed, 79 insertions(+) diff --git a/drivers/net/intel/iavf/iavf.h b/drivers/net/intel/iavf/iavf.h index 403c61e2e8..2f1779d47b 100644 --- a/drivers/net/intel/iavf/iavf.h +++ b/drivers/net/intel/iavf/iavf.h @@ -76,6 +76,7 @@ #define IAVF_QUEUE_ITR_INTERVAL_MAX 8160 /* 8160 us */ #define IAVF_ALARM_INTERVAL 50000 /* us */ +#define IAVF_PHC_SYNC_ALARM_INTERVAL_US 200000 /* The overhead from MTU to max frame size. * Considering QinQ packet, the VLAN tag needs to be counted twice. 
@@ -383,6 +384,8 @@ struct iavf_adapter { enum iavf_rx_func_type rx_func_type; enum iavf_tx_func_type tx_func_type; uint16_t fdir_ref_cnt; + uint8_t phc_sync_ticks; + bool phc_sync_paused; struct iavf_devargs devargs; bool mac_primary_set; }; @@ -517,6 +520,8 @@ void iavf_add_del_all_mac_addr(struct iavf_adapter *adapter, bool add); int iavf_dev_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete); void iavf_dev_alarm_handler(void *param); +void iavf_phc_sync_alarm_start(struct rte_eth_dev *dev); +void iavf_phc_sync_alarm_stop(struct rte_eth_dev *dev); int iavf_query_stats(struct iavf_adapter *adapter, struct virtchnl_eth_stats **pstats); int iavf_config_promisc(struct iavf_adapter *adapter, bool enable_unicast, diff --git a/drivers/net/intel/iavf/iavf_ethdev.c b/drivers/net/intel/iavf/iavf_ethdev.c index 1eca20bc9a..02272d45c1 100644 --- a/drivers/net/intel/iavf/iavf_ethdev.c +++ b/drivers/net/intel/iavf/iavf_ethdev.c @@ -21,6 +21,7 @@ #include <rte_pci.h> #include <rte_alarm.h> #include <rte_atomic.h> +#include <rte_cycles.h> #include <rte_eal.h> #include <rte_ether.h> #include <ethdev_driver.h> @@ -145,6 +146,7 @@ static int iavf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id); static void iavf_dev_interrupt_handler(void *param); static void iavf_disable_irq0(struct iavf_hw *hw); +static bool iavf_phc_sync_alarm_needed(struct rte_eth_dev *dev); static int iavf_dev_flow_ops_get(struct rte_eth_dev *dev, const struct rte_flow_ops **ops); static int iavf_set_mc_addr_list(struct rte_eth_dev *dev, @@ -1056,6 +1058,8 @@ iavf_dev_start(struct rte_eth_dev *dev) goto error; } + iavf_phc_sync_alarm_start(dev); + return 0; error: @@ -1082,6 +1086,8 @@ iavf_dev_stop(struct rte_eth_dev *dev) if (adapter->stopped == 1) return 0; + iavf_phc_sync_alarm_stop(dev); + /* Disable the interrupt for Rx */ rte_intr_efd_disable(intr_handle); /* Rx interrupt vector mapping free */ @@ -2705,9 +2711,11 @@ void iavf_dev_alarm_handler(void *param) { 
struct rte_eth_dev *dev = (struct rte_eth_dev *)param; + struct iavf_adapter *adapter; if (dev == NULL || dev->data == NULL || dev->data->dev_private == NULL) return; + adapter = IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private); uint32_t icr0; @@ -2723,10 +2731,70 @@ iavf_dev_alarm_handler(void *param) iavf_enable_irq0(hw); + if (iavf_phc_sync_alarm_needed(dev) && !adapter->phc_sync_paused) { + adapter->phc_sync_ticks++; + if (adapter->phc_sync_ticks >= + IAVF_PHC_SYNC_ALARM_INTERVAL_US / IAVF_ALARM_INTERVAL) { + struct ci_rx_queue *rxq = dev->data->rx_queues[0]; + + adapter->phc_sync_ticks = 0; + if (iavf_get_phc_time(rxq) == 0) + rxq->hw_time_update = rte_get_timer_cycles() / + (rte_get_timer_hz() / 1000); + } + } else { + adapter->phc_sync_ticks = 0; + } + rte_eal_alarm_set(IAVF_ALARM_INTERVAL, iavf_dev_alarm_handler, dev); } +static bool +iavf_phc_sync_alarm_needed(struct rte_eth_dev *dev) +{ + struct iavf_adapter *adapter; + + adapter = IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + + if (adapter->closed || adapter->stopped) + return false; + + if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)) + return false; + + if (dev->data->nb_rx_queues == 0 || dev->data->rx_queues[0] == NULL) + return false; + + return true; +} + +void +iavf_phc_sync_alarm_start(struct rte_eth_dev *dev) +{ + struct iavf_adapter *adapter; + + if (!iavf_phc_sync_alarm_needed(dev)) + return; + + adapter = IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + adapter->phc_sync_paused = false; + adapter->phc_sync_ticks = 0; +} + +void +iavf_phc_sync_alarm_stop(struct rte_eth_dev *dev) +{ + struct iavf_adapter *adapter; + + if (dev == NULL || dev->data == NULL || dev->data->dev_private == NULL) + return; + + adapter = IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + adapter->phc_sync_paused = true; + adapter->phc_sync_ticks = 0; +} + static int iavf_dev_flow_ops_get(struct rte_eth_dev 
*dev, const struct rte_flow_ops **ops) @@ -2912,6 +2980,7 @@ iavf_dev_init(struct rte_eth_dev *eth_dev) } else { rte_eal_alarm_cancel(iavf_dev_alarm_handler, eth_dev); } + iavf_phc_sync_alarm_stop(eth_dev); rte_free(eth_dev->data->mac_addrs); eth_dev->data->mac_addrs = NULL; @@ -2986,6 +3055,7 @@ iavf_dev_close(struct rte_eth_dev *dev) } else { rte_eal_alarm_cancel(iavf_dev_alarm_handler, dev); } + iavf_phc_sync_alarm_stop(dev); iavf_disable_irq0(hw); if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_QOS) diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c index 08dd6f2d7f..82943472e1 100644 --- a/drivers/net/intel/iavf/iavf_vchnl.c +++ b/drivers/net/intel/iavf/iavf_vchnl.c @@ -2133,12 +2133,16 @@ iavf_request_queues(struct rte_eth_dev *dev, uint16_t num) args.out_size = IAVF_AQ_BUF_SZ; if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) { + iavf_phc_sync_alarm_stop(dev); err = iavf_execute_vf_cmd_safe(adapter, &args, 0); + iavf_phc_sync_alarm_start(dev); } else { rte_eal_alarm_cancel(iavf_dev_alarm_handler, dev); + iavf_phc_sync_alarm_stop(dev); err = iavf_execute_vf_cmd_safe(adapter, &args, 0); rte_eal_alarm_set(IAVF_ALARM_INTERVAL, iavf_dev_alarm_handler, dev); + iavf_phc_sync_alarm_start(dev); } if (err) { -- 2.47.1 ^ permalink raw reply related [flat|nested] 13+ messages in thread
* [PATCH v1 2/2] net/iavf: reuse device alarm for PHC sync
  2026-04-02 15:21 [PATCH v1 0/2] Update Rx Timestamp in IAVF PMD Soumyadeep Hore
  2026-04-02 15:21 ` [PATCH v1 1/2] net/iavf: remove PHC polling from Rx datapath Soumyadeep Hore
@ 2026-04-02 15:21 ` Soumyadeep Hore
  1 sibling, 0 replies; 13+ messages in thread
From: Soumyadeep Hore @ 2026-04-02 15:21 UTC (permalink / raw)
  To: bruce.richardson, manoj.kumar.subbarao, aman.deep.singh, dev, rajesh3.kumar

Reuse existing iavf device alarm cadence to drive periodic PHC sync
instead of a dedicated PHC alarm callback. Keep PHC start/stop hooks as
pause/resume controls around queue reconfiguration and device lifecycle
paths.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/net/intel/iavf/iavf.h        |  5 ++
 drivers/net/intel/iavf/iavf_ethdev.c | 70 ++++++++++++++++++++++++++++
 drivers/net/intel/iavf/iavf_vchnl.c  |  4 ++
 3 files changed, 79 insertions(+)

diff --git a/drivers/net/intel/iavf/iavf.h b/drivers/net/intel/iavf/iavf.h
index 39949acc11..caba5b49cd 100644
--- a/drivers/net/intel/iavf/iavf.h
+++ b/drivers/net/intel/iavf/iavf.h
@@ -76,6 +76,7 @@
 #define IAVF_QUEUE_ITR_INTERVAL_MAX 8160 /* 8160 us */

 #define IAVF_ALARM_INTERVAL 50000 /* us */
+#define IAVF_PHC_SYNC_ALARM_INTERVAL_US 200000

 /* The overhead from MTU to max frame size.
  * Considering QinQ packet, the VLAN tag needs to be counted twice.
@@ -383,6 +384,8 @@ struct iavf_adapter {
 	enum iavf_rx_func_type rx_func_type;
 	enum iavf_tx_func_type tx_func_type;
 	uint16_t fdir_ref_cnt;
+	uint8_t phc_sync_ticks;
+	bool phc_sync_paused;
 	struct iavf_devargs devargs;
 };

@@ -518,6 +521,8 @@ void iavf_add_del_all_mac_addr(struct iavf_adapter *adapter, bool add);
 int iavf_dev_link_update(struct rte_eth_dev *dev,
			 __rte_unused int wait_to_complete);
 void iavf_dev_alarm_handler(void *param);
+void iavf_phc_sync_alarm_start(struct rte_eth_dev *dev);
+void iavf_phc_sync_alarm_stop(struct rte_eth_dev *dev);
 int iavf_query_stats(struct iavf_adapter *adapter,
		     struct virtchnl_eth_stats **pstats);
 int iavf_config_promisc(struct iavf_adapter *adapter, bool enable_unicast,
diff --git a/drivers/net/intel/iavf/iavf_ethdev.c b/drivers/net/intel/iavf/iavf_ethdev.c
index 802e095174..1cb78e2f36 100644
--- a/drivers/net/intel/iavf/iavf_ethdev.c
+++ b/drivers/net/intel/iavf/iavf_ethdev.c
@@ -21,6 +21,7 @@
 #include <rte_pci.h>
 #include <rte_alarm.h>
 #include <rte_atomic.h>
+#include <rte_cycles.h>
 #include <rte_eal.h>
 #include <rte_ether.h>
 #include <ethdev_driver.h>
@@ -145,6 +146,7 @@ static int iavf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev,
					  uint16_t queue_id);
 static void iavf_dev_interrupt_handler(void *param);
 static void iavf_disable_irq0(struct iavf_hw *hw);
+static bool iavf_phc_sync_alarm_needed(struct rte_eth_dev *dev);
 static int iavf_dev_flow_ops_get(struct rte_eth_dev *dev,
				 const struct rte_flow_ops **ops);
 static int iavf_set_mc_addr_list(struct rte_eth_dev *dev,
@@ -1079,6 +1081,8 @@ iavf_dev_start(struct rte_eth_dev *dev)
		goto error;
	}

+	iavf_phc_sync_alarm_start(dev);
+
	return 0;

 error:
@@ -1105,6 +1109,8 @@ iavf_dev_stop(struct rte_eth_dev *dev)
	if (adapter->stopped == 1)
		return 0;

+	iavf_phc_sync_alarm_stop(dev);
+
	/* Disable the interrupt for Rx */
	rte_intr_efd_disable(intr_handle);
	/* Rx interrupt vector mapping free */
@@ -2723,9 +2729,11 @@ void
 iavf_dev_alarm_handler(void *param)
 {
	struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
+	struct iavf_adapter *adapter;

	if (dev == NULL || dev->data == NULL || dev->data->dev_private == NULL)
		return;

+	adapter = IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
	struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
	uint32_t icr0;
@@ -2741,10 +2749,70 @@ iavf_dev_alarm_handler(void *param)

	iavf_enable_irq0(hw);

+	if (iavf_phc_sync_alarm_needed(dev) && !adapter->phc_sync_paused) {
+		adapter->phc_sync_ticks++;
+		if (adapter->phc_sync_ticks >=
+		    IAVF_PHC_SYNC_ALARM_INTERVAL_US / IAVF_ALARM_INTERVAL) {
+			struct ci_rx_queue *rxq = dev->data->rx_queues[0];
+
+			adapter->phc_sync_ticks = 0;
+			if (iavf_get_phc_time(rxq) == 0)
+				rxq->hw_time_update = rte_get_timer_cycles() /
+					(rte_get_timer_hz() / 1000);
+		}
+	} else {
+		adapter->phc_sync_ticks = 0;
+	}
+
	rte_eal_alarm_set(IAVF_ALARM_INTERVAL, iavf_dev_alarm_handler, dev);
 }

+static bool
+iavf_phc_sync_alarm_needed(struct rte_eth_dev *dev)
+{
+	struct iavf_adapter *adapter;
+
+	adapter = IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+
+	if (adapter->closed || adapter->stopped)
+		return false;
+
+	if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
+		return false;
+
+	if (dev->data->nb_rx_queues == 0 || dev->data->rx_queues[0] == NULL)
+		return false;
+
+	return true;
+}
+
+void
+iavf_phc_sync_alarm_start(struct rte_eth_dev *dev)
+{
+	struct iavf_adapter *adapter;
+
+	if (!iavf_phc_sync_alarm_needed(dev))
+		return;
+
+	adapter = IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	adapter->phc_sync_paused = false;
+	adapter->phc_sync_ticks = 0;
+}
+
+void
+iavf_phc_sync_alarm_stop(struct rte_eth_dev *dev)
+{
+	struct iavf_adapter *adapter;
+
+	if (dev == NULL || dev->data == NULL || dev->data->dev_private == NULL)
+		return;
+
+	adapter = IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	adapter->phc_sync_paused = true;
+	adapter->phc_sync_ticks = 0;
+}
+
 static int
 iavf_dev_flow_ops_get(struct rte_eth_dev *dev,
		      const struct rte_flow_ops **ops)
@@ -2924,6 +2992,7 @@ iavf_dev_init(struct rte_eth_dev *eth_dev)
	} else {
		rte_eal_alarm_cancel(iavf_dev_alarm_handler, eth_dev);
	}
+	iavf_phc_sync_alarm_stop(eth_dev);

	rte_free(eth_dev->data->mac_addrs);
	eth_dev->data->mac_addrs = NULL;
@@ -2995,6 +3064,7 @@ iavf_dev_close(struct rte_eth_dev *dev)
	} else {
		rte_eal_alarm_cancel(iavf_dev_alarm_handler, dev);
	}
+	iavf_phc_sync_alarm_stop(dev);
	iavf_disable_irq0(hw);

	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_QOS)
diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c
index 9ad39300c6..4bd51dcc21 100644
--- a/drivers/net/intel/iavf/iavf_vchnl.c
+++ b/drivers/net/intel/iavf/iavf_vchnl.c
@@ -2091,12 +2091,16 @@ iavf_request_queues(struct rte_eth_dev *dev, uint16_t num)
	args.out_size = IAVF_AQ_BUF_SZ;

	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
+		iavf_phc_sync_alarm_stop(dev);
		err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
+		iavf_phc_sync_alarm_start(dev);
	} else {
		rte_eal_alarm_cancel(iavf_dev_alarm_handler, dev);
+		iavf_phc_sync_alarm_stop(dev);
		err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
		rte_eal_alarm_set(IAVF_ALARM_INTERVAL,
				  iavf_dev_alarm_handler, dev);
+		iavf_phc_sync_alarm_start(dev);
	}

	if (err) {
-- 
2.47.1

^ permalink raw reply related	[flat|nested] 13+ messages in thread
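[Editor's note] The cadence mechanism in patch 2/2 is a simple frequency divider: the 50 ms device alarm increments a tick counter, and a PHC read fires only when the counter reaches 200000/50000 = 4 ticks (200 ms), while the pause flag zeroes the counter so a resume always waits a full period. A minimal standalone sketch of that logic — names and structure are illustrative, not the driver's exact symbols:

```c
#include <stdbool.h>
#include <stdint.h>

/* Cadence constants mirroring the patch: alarm every 50 ms,
 * PHC re-sync wanted every 200 ms, i.e. every 4th tick. */
#define ALARM_INTERVAL_US     50000
#define PHC_SYNC_INTERVAL_US 200000

struct phc_sync_state {
	uint8_t ticks;   /* counts alarm callbacks since last sync */
	bool paused;     /* pause/resume control around reconfiguration */
};

/* Called once per alarm tick; returns true when a PHC read is due.
 * When sync is not needed or is paused, the counter is reset so a
 * resume starts a full 200 ms period, matching the patch's else branch. */
static bool
phc_sync_tick(struct phc_sync_state *s, bool sync_needed)
{
	if (!sync_needed || s->paused) {
		s->ticks = 0;
		return false;
	}
	if (++s->ticks >= PHC_SYNC_INTERVAL_US / ALARM_INTERVAL_US) {
		s->ticks = 0;
		return true;
	}
	return false;
}
```

Under these assumed constants the divider fires on every 4th call, which is why the driver can piggyback on the existing alarm instead of registering a second, dedicated callback.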
end of thread, other threads:[~2026-04-08 16:27 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-04-02 15:21 [PATCH v1 0/2] Update Rx Timestamp in IAVF PMD Soumyadeep Hore
2026-04-02 15:21 ` [PATCH v1 1/2] net/iavf: remove PHC polling from Rx datapath Soumyadeep Hore
2026-04-02 15:46   ` [PATCH v2 0/2] Update Rx Timestamp in IAVF PMD Soumyadeep Hore
2026-04-02 15:46     ` [PATCH v2 1/2] net/iavf: remove PHC polling from Rx datapath Soumyadeep Hore
2026-04-06 21:22       ` [PATCH v3 0/2] Update Rx Timestamp in IAVF PMD Soumyadeep Hore
2026-04-06 21:22         ` [PATCH v3 1/2] net/iavf: remove PHC polling from Rx datapath Soumyadeep Hore
2026-04-08 16:27           ` Bruce Richardson
2026-04-06 21:22         ` [PATCH v3 2/2] net/iavf: reuse device alarm for PHC sync Soumyadeep Hore
2026-04-02 15:46     ` [PATCH v2 " Soumyadeep Hore
2026-04-02 15:48   ` [PATCH v2 0/2] Update Rx Timestamp in IAVF PMD Soumyadeep Hore
2026-04-02 15:48     ` [PATCH v2 1/2] net/iavf: remove PHC polling from Rx datapath Soumyadeep Hore
2026-04-02 15:48       ` [PATCH v2 2/2] net/iavf: reuse device alarm for PHC sync Soumyadeep Hore
2026-04-02 15:21 ` [PATCH v1 " Soumyadeep Hore