public inbox for llvm@lists.linux.dev
From: kernel test robot <lkp@intel.com>
To: Alexander Lobakin <aleksander.lobakin@intel.com>
Cc: llvm@lists.linux.dev, oe-kbuild-all@lists.linux.dev
Subject: [alobakin:libeth 8/8] drivers/net/ethernet/intel/idpf/idpf_txrx.h:724:1: error: static assertion failed due to requirement '__builtin_offsetof(struct idpf_tx_queue, __cacheline_group_end__read_write) - (__builtin_offsetof(struct idpf_tx_queue, __cacheline_group_begin__read_wri...
Date: Thu, 16 Apr 2026 12:30:05 +0800
Message-ID: <202604161224.45VyJGG4-lkp@intel.com>

tree:   https://github.com/alobakin/linux libeth
head:   2f4cad2257c116e3f47b7547bdd3412637a16b32
commit: 2f4cad2257c116e3f47b7547bdd3412637a16b32 [8/8] idpf: add flow-based XDP fallback for FWs without Tx FIFO support
config: s390-allmodconfig (https://download.01.org/0day-ci/archive/20260416/202604161224.45VyJGG4-lkp@intel.com/config)
compiler: clang version 18.1.8 (https://github.com/llvm/llvm-project 3b5b5c1ec4a3095ab096dd780e84d7ab81f3d7ff)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260416/202604161224.45VyJGG4-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202604161224.45VyJGG4-lkp@intel.com/

All errors (new ones prefixed by >>):

   In file included from drivers/net/ethernet/intel/idpf/idpf_dev.c:4:
   In file included from drivers/net/ethernet/intel/idpf/idpf.h:28:
>> drivers/net/ethernet/intel/idpf/idpf_txrx.h:724:1: error: static assertion failed due to requirement '__builtin_offsetof(struct idpf_tx_queue, __cacheline_group_end__read_write) - (__builtin_offsetof(struct idpf_tx_queue, __cacheline_group_begin__read_write) + sizeof ((((struct idpf_tx_queue *)0)->__cacheline_group_begin__read_write))) <= (104 + __builtin_offsetof(struct idpf_tx_queue, cached_tstamp_caps) - (__builtin_offsetof(struct idpf_tx_queue, timer) + sizeof ((((struct idpf_tx_queue *)0)->timer))) + __builtin_offsetof(struct idpf_tx_queue, q_stats) - (__builtin_offsetof(struct idpf_tx_queue, tstamp_task) + sizeof ((((struct idpf_tx_queue *)0)->tstamp_task))))': offsetof(struct idpf_tx_queue, __cacheline_group_end__read_write) - offsetofend(struct idpf_tx_queue, __cacheline_group_begin__read_write) <= (104 + __builtin_offsetof(struct idpf_tx_queue, cached_tstamp_caps) - (__builtin_offsetof(struct idpf_tx_queue, timer) + sizeof((((struct idpf_tx_queue *)0)->timer))) + __builtin_offsetof(struct idpf_tx_queue, q_stats) - (__builtin_offsetof(struct idpf_tx_queue, tstamp_task) + sizeof((((struct idpf_tx_queue *)0)->tstamp_task))))
     724 | libeth_cacheline_set_assert(struct idpf_tx_queue, 64,
         | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
     725 |                             104 +
         |                             ~~~~~
     726 |                             offsetof(struct idpf_tx_queue, cached_tstamp_caps) -
         |                             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
     727 |                             offsetofend(struct idpf_tx_queue, timer) +
         |                             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
     728 |                             offsetof(struct idpf_tx_queue, q_stats) -
         |                             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
     729 |                             offsetofend(struct idpf_tx_queue, tstamp_task),
         |                             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
     730 |                             32);
         |                             ~~~
   include/net/libeth/cache.h:62:2: note: expanded from macro 'libeth_cacheline_set_assert'
      62 |         libeth_cacheline_group_assert(type, read_write, rw);                  \
         |         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   include/net/libeth/cache.h:24:16: note: expanded from macro 'libeth_cacheline_group_assert'
      24 |         static_assert(offsetof(type, __cacheline_group_end__##grp) -          \
         |         ~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      25 |                       offsetofend(type, __cacheline_group_begin__##grp) <=    \
         |                       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      26 |                       (sz))
         |                       ~~~~~
   include/linux/stddef.h:16:32: note: expanded from macro 'offsetof'
      16 | #define offsetof(TYPE, MEMBER)  __builtin_offsetof(TYPE, MEMBER)
         |                                 ^
   include/linux/build_bug.h:79:50: note: expanded from macro 'static_assert'
      79 | #define static_assert(expr, ...) __static_assert(expr, ##__VA_ARGS__, #expr)
         |                                  ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
   include/linux/build_bug.h:80:56: note: expanded from macro '__static_assert'
      80 | #define __static_assert(expr, msg, ...) _Static_assert(expr, msg)
         |                                                        ^~~~
   drivers/net/ethernet/intel/idpf/idpf_txrx.h:724:1: note: expression evaluates to '184 <= 112'
     724 | libeth_cacheline_set_assert(struct idpf_tx_queue, 64,
         | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
     725 |                             104 +
         |                             ~~~~~
     726 |                             offsetof(struct idpf_tx_queue, cached_tstamp_caps) -
         |                             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
     727 |                             offsetofend(struct idpf_tx_queue, timer) +
         |                             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
     728 |                             offsetof(struct idpf_tx_queue, q_stats) -
         |                             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
     729 |                             offsetofend(struct idpf_tx_queue, tstamp_task),
         |                             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
     730 |                             32);
         |                             ~~~
   include/net/libeth/cache.h:62:2: note: expanded from macro 'libeth_cacheline_set_assert'
      62 |         libeth_cacheline_group_assert(type, read_write, rw);                  \
         |         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   include/net/libeth/cache.h:25:59: note: expanded from macro 'libeth_cacheline_group_assert'
      24 |         static_assert(offsetof(type, __cacheline_group_end__##grp) -          \
         |         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      25 |                       offsetofend(type, __cacheline_group_begin__##grp) <=    \
         |                       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~
      26 |                       (sz))
         |                       ~~~~~
   include/linux/build_bug.h:79:50: note: expanded from macro 'static_assert'
      79 | #define static_assert(expr, ...) __static_assert(expr, ##__VA_ARGS__, #expr)
         |                                  ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
   include/linux/build_bug.h:80:56: note: expanded from macro '__static_assert'
      80 | #define __static_assert(expr, msg, ...) _Static_assert(expr, msg)
         |                                                        ^~~~
   1 error generated.


vim +724 drivers/net/ethernet/intel/idpf/idpf_txrx.h

1c325aac10a82f1 Alan Brady        2023-08-07  477  
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  478  /**
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  479   * struct idpf_rx_queue - software structure representing a receive queue
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  480   * @rx: universal receive descriptor array
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  481   * @single_buf: buffer descriptor array in singleq
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  482   * @desc_ring: virtual descriptor ring address
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  483   * @bufq_sets: Pointer to the array of buffer queues in splitq mode
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  484   * @napi: NAPI instance corresponding to this queue (splitq)
705457e7211f22c Michal Kubiak     2025-08-26  485   * @xdp_prog: attached XDP program
74d1412ac8f3719 Alexander Lobakin 2024-06-20  486   * @rx_buf: See struct &libeth_fqe
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  487   * @pp: Page pool pointer in singleq mode
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  488   * @tail: Tail offset. Used for both queue models single and split.
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  489   * @flags: See enum idpf_queue_flags_t
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  490   * @idx: For RX queue, it is used to index to total RX queue across groups and
95af467d9a4e3be Alan Brady        2023-08-07  491   *	 used for skb reporting.
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  492   * @desc_count: Number of descriptors
ac8a861f632e68e Michal Kubiak     2025-08-26  493   * @num_xdp_txq: total number of XDP Tx queues
ac8a861f632e68e Michal Kubiak     2025-08-26  494   * @xdpsqs: shortcut for XDP Tx queues array
5a816aae2d463d7 Alexander Lobakin 2024-06-20  495   * @rxdids: Supported RX descriptor ids
ac8a861f632e68e Michal Kubiak     2025-08-26  496   * @truesize: data buffer truesize in singleq
5a816aae2d463d7 Alexander Lobakin 2024-06-20  497   * @rx_ptype_lkup: LUT of Rx ptypes
ac8a861f632e68e Michal Kubiak     2025-08-26  498   * @xdp_rxq: XDP queue info
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  499   * @next_to_use: Next descriptor to use
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  500   * @next_to_clean: Next descriptor to clean
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  501   * @next_to_alloc: RX buffer to allocate at
a4d755d1040a490 Alexander Lobakin 2025-08-26  502   * @xdp: XDP buffer with the current frame
9705d6552f5871a Alexander Lobakin 2025-09-11  503   * @xsk: current XDP buffer in XSk mode
9705d6552f5871a Alexander Lobakin 2025-09-11  504   * @pool: XSk pool if installed
494565a74502671 Milena Olech      2025-04-16  505   * @cached_phc_time: Cached PHC time for the Rx queue
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  506   * @stats_sync: See struct u64_stats_sync
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  507   * @q_stats: See union idpf_rx_queue_stats
1c325aac10a82f1 Alan Brady        2023-08-07  508   * @q_id: Queue id
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  509   * @size: Length of descriptor ring in bytes
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  510   * @dma: Physical address of ring
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  511   * @q_vector: Backreference to associated vector
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  512   * @rx_buffer_low_watermark: RX buffer low watermark
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  513   * @rx_hbuf_size: Header buffer size
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  514   * @rx_buf_size: Buffer size
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  515   * @rx_max_pkt_size: RX max packet size
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  516   */
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  517  struct idpf_rx_queue {
5a816aae2d463d7 Alexander Lobakin 2024-06-20  518  	__cacheline_group_begin_aligned(read_mostly);
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  519  	union {
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  520  		union virtchnl2_rx_desc *rx;
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  521  		struct virtchnl2_singleq_rx_buf_desc *single_buf;
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  522  
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  523  		void *desc_ring;
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  524  	};
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  525  	union {
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  526  		struct {
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  527  			struct idpf_bufq_set *bufq_sets;
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  528  			struct napi_struct *napi;
705457e7211f22c Michal Kubiak     2025-08-26  529  			struct bpf_prog __rcu *xdp_prog;
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  530  		};
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  531  		struct {
74d1412ac8f3719 Alexander Lobakin 2024-06-20  532  			struct libeth_fqe *rx_buf;
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  533  			struct page_pool *pp;
705457e7211f22c Michal Kubiak     2025-08-26  534  			void __iomem *tail;
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  535  		};
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  536  	};
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  537  
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  538  	DECLARE_BITMAP(flags, __IDPF_Q_FLAGS_NBITS);
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  539  	u16 idx;
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  540  	u16 desc_count;
5a816aae2d463d7 Alexander Lobakin 2024-06-20  541  
ac8a861f632e68e Michal Kubiak     2025-08-26  542  	u32 num_xdp_txq;
ac8a861f632e68e Michal Kubiak     2025-08-26  543  	union {
ac8a861f632e68e Michal Kubiak     2025-08-26  544  		struct idpf_tx_queue **xdpsqs;
ac8a861f632e68e Michal Kubiak     2025-08-26  545  		struct {
5a816aae2d463d7 Alexander Lobakin 2024-06-20  546  			u32 rxdids;
ac8a861f632e68e Michal Kubiak     2025-08-26  547  			u32 truesize;
ac8a861f632e68e Michal Kubiak     2025-08-26  548  		};
ac8a861f632e68e Michal Kubiak     2025-08-26  549  	};
1b1b26208515482 Alexander Lobakin 2024-06-20  550  	const struct libeth_rx_pt *rx_ptype_lkup;
ac8a861f632e68e Michal Kubiak     2025-08-26  551  
ac8a861f632e68e Michal Kubiak     2025-08-26  552  	struct xdp_rxq_info xdp_rxq;
5a816aae2d463d7 Alexander Lobakin 2024-06-20  553  	__cacheline_group_end_aligned(read_mostly);
5a816aae2d463d7 Alexander Lobakin 2024-06-20  554  
5a816aae2d463d7 Alexander Lobakin 2024-06-20  555  	__cacheline_group_begin_aligned(read_write);
a4d755d1040a490 Alexander Lobakin 2025-08-26  556  	u32 next_to_use;
a4d755d1040a490 Alexander Lobakin 2025-08-26  557  	u32 next_to_clean;
a4d755d1040a490 Alexander Lobakin 2025-08-26  558  	u32 next_to_alloc;
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  559  
9705d6552f5871a Alexander Lobakin 2025-09-11  560  	union {
a4d755d1040a490 Alexander Lobakin 2025-08-26  561  		struct libeth_xdp_buff_stash xdp;
9705d6552f5871a Alexander Lobakin 2025-09-11  562  		struct {
9705d6552f5871a Alexander Lobakin 2025-09-11  563  			struct libeth_xdp_buff *xsk;
9705d6552f5871a Alexander Lobakin 2025-09-11  564  			struct xsk_buff_pool *pool;
9705d6552f5871a Alexander Lobakin 2025-09-11  565  		};
9705d6552f5871a Alexander Lobakin 2025-09-11  566  	};
494565a74502671 Milena Olech      2025-04-16  567  	u64 cached_phc_time;
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  568  
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  569  	struct u64_stats_sync stats_sync;
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  570  	struct idpf_rx_queue_stats q_stats;
5a816aae2d463d7 Alexander Lobakin 2024-06-20  571  	__cacheline_group_end_aligned(read_write);
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  572  
5a816aae2d463d7 Alexander Lobakin 2024-06-20  573  	__cacheline_group_begin_aligned(cold);
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  574  	u32 q_id;
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  575  	u32 size;
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  576  	dma_addr_t dma;
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  577  
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  578  	struct idpf_q_vector *q_vector;
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  579  
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  580  	u16 rx_buffer_low_watermark;
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  581  	u16 rx_hbuf_size;
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  582  	u16 rx_buf_size;
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  583  	u16 rx_max_pkt_size;
5a816aae2d463d7 Alexander Lobakin 2024-06-20  584  	__cacheline_group_end_aligned(cold);
5a816aae2d463d7 Alexander Lobakin 2024-06-20  585  };
ac8a861f632e68e Michal Kubiak     2025-08-26  586  libeth_cacheline_set_assert(struct idpf_rx_queue,
ac8a861f632e68e Michal Kubiak     2025-08-26  587  			    ALIGN(64, __alignof(struct xdp_rxq_info)) +
ac8a861f632e68e Michal Kubiak     2025-08-26  588  			    sizeof(struct xdp_rxq_info),
a4d755d1040a490 Alexander Lobakin 2025-08-26  589  			    96 + offsetof(struct idpf_rx_queue, q_stats) -
a4d755d1040a490 Alexander Lobakin 2025-08-26  590  			    offsetofend(struct idpf_rx_queue, cached_phc_time),
5a816aae2d463d7 Alexander Lobakin 2024-06-20  591  			    32);
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  592  
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  593  /**
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  594   * struct idpf_tx_queue - software structure representing a transmit queue
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  595   * @base_tx: base Tx descriptor array
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  596   * @base_ctx: base Tx context descriptor array
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  597   * @flex_tx: flex Tx descriptor array
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  598   * @flex_ctx: flex Tx context descriptor array
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  599   * @desc_ring: virtual descriptor ring address
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  600   * @tx_buf: See struct idpf_tx_buf
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  601   * @txq_grp: See struct idpf_txq_group
ac8a861f632e68e Michal Kubiak     2025-08-26  602   * @complq: corresponding completion queue in XDP mode
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  603   * @dev: Device back pointer for DMA mapping
8ff6d62261a3d9a Alexander Lobakin 2025-09-11  604   * @pool: corresponding XSk pool if installed
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  605   * @tail: Tail offset. Used for both queue models single and split
1c325aac10a82f1 Alan Brady        2023-08-07  606   * @flags: See enum idpf_queue_flags_t
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  607   * @idx: For TX queue, it is used as index to map between TX queue group and
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  608   *	 hot path TX pointers stored in vport. Used in both singleq/splitq.
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  609   * @desc_count: Number of descriptors
1c325aac10a82f1 Alan Brady        2023-08-07  610   * @tx_min_pkt_len: Min supported packet length
ac8a861f632e68e Michal Kubiak     2025-08-26  611   * @thresh: XDP queue cleaning threshold
5a816aae2d463d7 Alexander Lobakin 2024-06-20  612   * @netdev: &net_device corresponding to this queue
5a816aae2d463d7 Alexander Lobakin 2024-06-20  613   * @next_to_use: Next descriptor to use
5a816aae2d463d7 Alexander Lobakin 2024-06-20  614   * @next_to_clean: Next descriptor to clean
f2d18e16479cac7 Joshua Hay        2025-07-25  615   * @last_re: last descriptor index that RE bit was set
f2d18e16479cac7 Joshua Hay        2025-07-25  616   * @tx_max_bufs: Max buffers that can be transmitted with scatter-gather
5a816aae2d463d7 Alexander Lobakin 2024-06-20  617   * @cleaned_bytes: Splitq only, TXQ only: When a TX completion is received on
5a816aae2d463d7 Alexander Lobakin 2024-06-20  618   *		   the TX completion queue, it can be for any TXQ associated
5a816aae2d463d7 Alexander Lobakin 2024-06-20  619   *		   with that completion queue. This means we can clean up to
5a816aae2d463d7 Alexander Lobakin 2024-06-20  620   *		   N TXQs during a single call to clean the completion queue.
5a816aae2d463d7 Alexander Lobakin 2024-06-20  621   *		   cleaned_bytes|pkts tracks the clean stats per TXQ during
5a816aae2d463d7 Alexander Lobakin 2024-06-20  622   *		   that single call to clean the completion queue. By doing so,
5a816aae2d463d7 Alexander Lobakin 2024-06-20  623   *		   we can update BQL with aggregate cleaned stats for each TXQ
5a816aae2d463d7 Alexander Lobakin 2024-06-20  624   *		   only once at the end of the cleaning routine.
5a816aae2d463d7 Alexander Lobakin 2024-06-20  625   * @clean_budget: singleq only, queue cleaning budget
5a816aae2d463d7 Alexander Lobakin 2024-06-20  626   * @cleaned_pkts: Number of packets cleaned for the above said case
cb83b559bea39f2 Joshua Hay        2025-07-25  627   * @refillq: Pointer to refill queue
2f4cad2257c116e Alexander Lobakin 2026-04-15  628   * @cached_tstamp_caps: Tx timestamp capabilities negotiated with the CP
ac8a861f632e68e Michal Kubiak     2025-08-26  629   * @pending: number of pending descriptors to send in QB
ac8a861f632e68e Michal Kubiak     2025-08-26  630   * @xdp_tx: number of pending &xdp_buff or &xdp_frame buffers
ac8a861f632e68e Michal Kubiak     2025-08-26  631   * @timer: timer for XDP Tx queue cleanup
ac8a861f632e68e Michal Kubiak     2025-08-26  632   * @xdp_lock: lock for XDP Tx queues sharing
2f4cad2257c116e Alexander Lobakin 2026-04-15  633   * @pending_mask: mask of buffers waiting for completion in the FB XDP mode
1a49cf814fe1edf Milena Olech      2025-04-16  634   * @tstamp_task: Work that handles Tx timestamp read
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  635   * @stats_sync: See struct u64_stats_sync
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  636   * @q_stats: See union idpf_tx_queue_stats
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  637   * @q_id: Queue id
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  638   * @size: Length of descriptor ring in bytes
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  639   * @dma: Physical address of ring
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  640   * @q_vector: Backreference to associated vector
5f417d551324d28 Joshua Hay        2025-07-25  641   * @buf_pool_size: Total number of idpf_tx_buf
6b8e30b640653bb Michal Kubiak     2025-09-11  642   * @rel_q_id: relative virtchnl queue index
1c325aac10a82f1 Alan Brady        2023-08-07  643   */
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  644  struct idpf_tx_queue {
5a816aae2d463d7 Alexander Lobakin 2024-06-20  645  	__cacheline_group_begin_aligned(read_mostly);
95af467d9a4e3be Alan Brady        2023-08-07  646  	union {
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  647  		struct idpf_base_tx_desc *base_tx;
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  648  		struct idpf_base_tx_ctx_desc *base_ctx;
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  649  		union idpf_tx_flex_desc *flex_tx;
1a49cf814fe1edf Milena Olech      2025-04-16  650  		union idpf_flex_tx_ctx_desc *flex_ctx;
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  651  
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  652  		void *desc_ring;
95af467d9a4e3be Alan Brady        2023-08-07  653  	};
d9028db618a63e4 Alexander Lobakin 2024-09-04  654  	struct libeth_sqe *tx_buf;
ac8a861f632e68e Michal Kubiak     2025-08-26  655  	union {
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  656  		struct idpf_txq_group *txq_grp;
ac8a861f632e68e Michal Kubiak     2025-08-26  657  		struct idpf_compl_queue *complq;
ac8a861f632e68e Michal Kubiak     2025-08-26  658  	};
8ff6d62261a3d9a Alexander Lobakin 2025-09-11  659  	union {
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  660  		struct device *dev;
8ff6d62261a3d9a Alexander Lobakin 2025-09-11  661  		struct xsk_buff_pool *pool;
8ff6d62261a3d9a Alexander Lobakin 2025-09-11  662  	};
1c325aac10a82f1 Alan Brady        2023-08-07  663  	void __iomem *tail;
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  664  
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  665  	DECLARE_BITMAP(flags, __IDPF_Q_FLAGS_NBITS);
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  666  	u16 idx;
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  667  	u16 desc_count;
5a816aae2d463d7 Alexander Lobakin 2024-06-20  668  
ac8a861f632e68e Michal Kubiak     2025-08-26  669  	union {
5a816aae2d463d7 Alexander Lobakin 2024-06-20  670  		u16 tx_min_pkt_len;
ac8a861f632e68e Michal Kubiak     2025-08-26  671  		u32 thresh;
ac8a861f632e68e Michal Kubiak     2025-08-26  672  	};
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  673  
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  674  	struct net_device *netdev;
5a816aae2d463d7 Alexander Lobakin 2024-06-20  675  	__cacheline_group_end_aligned(read_mostly);
5a816aae2d463d7 Alexander Lobakin 2024-06-20  676  
5a816aae2d463d7 Alexander Lobakin 2024-06-20  677  	__cacheline_group_begin_aligned(read_write);
cba102cd719029a Alexander Lobakin 2025-08-26  678  	u32 next_to_use;
cba102cd719029a Alexander Lobakin 2025-08-26  679  	u32 next_to_clean;
ac8a861f632e68e Michal Kubiak     2025-08-26  680  
ac8a861f632e68e Michal Kubiak     2025-08-26  681  	union {
ac8a861f632e68e Michal Kubiak     2025-08-26  682  		struct {
f2d18e16479cac7 Joshua Hay        2025-07-25  683  			u16 last_re;
f2d18e16479cac7 Joshua Hay        2025-07-25  684  			u16 tx_max_bufs;
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  685  
95af467d9a4e3be Alan Brady        2023-08-07  686  			union {
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  687  				u32 cleaned_bytes;
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  688  				u32 clean_budget;
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  689  			};
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  690  			u16 cleaned_pkts;
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  691  
cb83b559bea39f2 Joshua Hay        2025-07-25  692  			struct idpf_sw_queue *refillq;
2f4cad2257c116e Alexander Lobakin 2026-04-15  693  
2f4cad2257c116e Alexander Lobakin 2026-04-15  694  			struct idpf_ptp_vport_tx_tstamp_caps *cached_tstamp_caps;
ac8a861f632e68e Michal Kubiak     2025-08-26  695  		};
ac8a861f632e68e Michal Kubiak     2025-08-26  696  		struct {
ac8a861f632e68e Michal Kubiak     2025-08-26  697  			u32 pending;
ac8a861f632e68e Michal Kubiak     2025-08-26  698  			u32 xdp_tx;
ac8a861f632e68e Michal Kubiak     2025-08-26  699  
ac8a861f632e68e Michal Kubiak     2025-08-26  700  			struct libeth_xdpsq_timer *timer;
ac8a861f632e68e Michal Kubiak     2025-08-26  701  			struct libeth_xdpsq_lock xdp_lock;
2f4cad2257c116e Alexander Lobakin 2026-04-15  702  
2f4cad2257c116e Alexander Lobakin 2026-04-15  703  			unsigned long *pending_mask;
ac8a861f632e68e Michal Kubiak     2025-08-26  704  		};
ac8a861f632e68e Michal Kubiak     2025-08-26  705  	};
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  706  
1a49cf814fe1edf Milena Olech      2025-04-16  707  	struct work_struct *tstamp_task;
1a49cf814fe1edf Milena Olech      2025-04-16  708  
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  709  	struct u64_stats_sync stats_sync;
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  710  	struct idpf_tx_queue_stats q_stats;
5a816aae2d463d7 Alexander Lobakin 2024-06-20  711  	__cacheline_group_end_aligned(read_write);
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  712  
5a816aae2d463d7 Alexander Lobakin 2024-06-20  713  	__cacheline_group_begin_aligned(cold);
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  714  	u32 q_id;
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  715  	u32 size;
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  716  	dma_addr_t dma;
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  717  
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  718  	struct idpf_q_vector *q_vector;
6b8e30b640653bb Michal Kubiak     2025-09-11  719  
5f417d551324d28 Joshua Hay        2025-07-25  720  	u32 buf_pool_size;
6b8e30b640653bb Michal Kubiak     2025-09-11  721  	u32 rel_q_id;
5a816aae2d463d7 Alexander Lobakin 2024-06-20  722  	__cacheline_group_end_aligned(cold);
5a816aae2d463d7 Alexander Lobakin 2024-06-20  723  };
5a816aae2d463d7 Alexander Lobakin 2024-06-20 @724  libeth_cacheline_set_assert(struct idpf_tx_queue, 64,
ac8a861f632e68e Michal Kubiak     2025-08-26  725  			    104 +
ac8a861f632e68e Michal Kubiak     2025-08-26  726  			    offsetof(struct idpf_tx_queue, cached_tstamp_caps) -
ac8a861f632e68e Michal Kubiak     2025-08-26  727  			    offsetofend(struct idpf_tx_queue, timer) +
ac8a861f632e68e Michal Kubiak     2025-08-26  728  			    offsetof(struct idpf_tx_queue, q_stats) -
ac8a861f632e68e Michal Kubiak     2025-08-26  729  			    offsetofend(struct idpf_tx_queue, tstamp_task),
5f417d551324d28 Joshua Hay        2025-07-25  730  			    32);
e4891e4687c8dd1 Alexander Lobakin 2024-06-20  731  

:::::: The code at line 724 was first introduced by commit
:::::: 5a816aae2d463d74882e21672ac5366573b0c511 idpf: strictly assert cachelines of queue and queue vector structures

:::::: TO: Alexander Lobakin <aleksander.lobakin@intel.com>
:::::: CC: Tony Nguyen <anthony.l.nguyen@intel.com>

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

