* [PATCH net-next v2 1/9] lib: packing: create __pack() and __unpack() variants without error checking
2024-10-26 0:04 [PATCH net-next v2 0/9] lib: packing: introduce and use (un)pack_fields Jacob Keller
@ 2024-10-26 0:04 ` Jacob Keller
2024-10-26 0:04 ` [PATCH net-next v2 2/9] lib: packing: demote truncation error in pack() to a warning in __pack() Jacob Keller
` (9 subsequent siblings)
10 siblings, 0 replies; 23+ messages in thread
From: Jacob Keller @ 2024-10-26 0:04 UTC (permalink / raw)
To: Vladimir Oltean, Andrew Morton, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Tony Nguyen, Przemek Kitszel, Masahiro Yamada
Cc: linux-kernel, netdev, Jacob Keller, Vladimir Oltean
From: Vladimir Oltean <vladimir.oltean@nxp.com>
A future variant of the API, which works on arrays of packed_field
structures, will make most of these checks redundant. The idea is to
perform the sanity checks once, at compile time, rather than on every
function call.
Introduce new variants of pack() and unpack(), which elide the sanity
checks, assuming that the input was pre-sanitized.
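To illustrate the intended split, here is a minimal sketch of a caller
that keeps the runtime validation (the buffer and bit positions are
made up for illustration):

	u8 buf[4] = {};
	int err;

	/* Pack the value 5 into bits 19:16 of a 4-byte buffer. pack()
	 * still returns -EINVAL/-ERANGE on invalid bit ranges, while
	 * __pack() would elide those checks entirely.
	 */
	err = pack(buf, 5, 19, 16, sizeof(buf), 0);
	if (err)
		return err;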
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
---
lib/packing.c | 142 ++++++++++++++++++++++++++++++++--------------------------
1 file changed, 78 insertions(+), 64 deletions(-)
diff --git a/lib/packing.c b/lib/packing.c
index 793942745e34..c29b079fdd78 100644
--- a/lib/packing.c
+++ b/lib/packing.c
@@ -51,64 +51,20 @@ static size_t calculate_box_addr(size_t box, size_t len, u8 quirks)
return offset_of_group + offset_in_group;
}
-/**
- * pack - Pack u64 number into bitfield of buffer.
- *
- * @pbuf: Pointer to a buffer holding the packed value.
- * @uval: CPU-readable unpacked value to pack.
- * @startbit: The index (in logical notation, compensated for quirks) where
- * the packed value starts within pbuf. Must be larger than, or
- * equal to, endbit.
- * @endbit: The index (in logical notation, compensated for quirks) where
- * the packed value ends within pbuf. Must be smaller than, or equal
- * to, startbit.
- * @pbuflen: The length in bytes of the packed buffer pointed to by @pbuf.
- * @quirks: A bit mask of QUIRK_LITTLE_ENDIAN, QUIRK_LSW32_IS_FIRST and
- * QUIRK_MSB_ON_THE_RIGHT.
- *
- * Return: 0 on success, EINVAL or ERANGE if called incorrectly. Assuming
- * correct usage, return code may be discarded. The @pbuf memory will
- * be modified on success.
- */
-int pack(void *pbuf, u64 uval, size_t startbit, size_t endbit, size_t pbuflen,
- u8 quirks)
+static void __pack(void *pbuf, u64 uval, size_t startbit, size_t endbit,
+ size_t pbuflen, u8 quirks)
{
/* Logical byte indices corresponding to the
* start and end of the field.
*/
- int plogical_first_u8, plogical_last_u8, box;
- /* width of the field to access in the pbuf */
- u64 value_width;
-
- /* startbit is expected to be larger than endbit, and both are
- * expected to be within the logically addressable range of the buffer.
- */
- if (unlikely(startbit < endbit || startbit >= BITS_PER_BYTE * pbuflen))
- /* Invalid function call */
- return -EINVAL;
-
- value_width = startbit - endbit + 1;
- if (unlikely(value_width > 64))
- return -ERANGE;
-
- /* Check if "uval" fits in "value_width" bits.
- * If value_width is 64, the check will fail, but any
- * 64-bit uval will surely fit.
- */
- if (unlikely(value_width < 64 && uval >= (1ull << value_width)))
- /* Cannot store "uval" inside "value_width" bits.
- * Truncating "uval" is most certainly not desirable,
- * so simply erroring out is appropriate.
- */
- return -ERANGE;
+ int plogical_first_u8 = startbit / BITS_PER_BYTE;
+ int plogical_last_u8 = endbit / BITS_PER_BYTE;
+ int box;
/* Iterate through an idealistic view of the pbuf as an u64 with
* no quirks, u8 by u8 (aligned at u8 boundaries), from high to low
* logical bit significance. "box" denotes the current logical u8.
*/
- plogical_first_u8 = startbit / BITS_PER_BYTE;
- plogical_last_u8 = endbit / BITS_PER_BYTE;
-
for (box = plogical_first_u8; box >= plogical_last_u8; box--) {
/* Bit indices into the currently accessed 8-bit box */
size_t box_start_bit, box_end_bit, box_addr;
@@ -163,15 +119,13 @@ int pack(void *pbuf, u64 uval, size_t startbit, size_t endbit, size_t pbuflen,
((u8 *)pbuf)[box_addr] &= ~box_mask;
((u8 *)pbuf)[box_addr] |= pval;
}
- return 0;
}
-EXPORT_SYMBOL(pack);
/**
- * unpack - Unpack u64 number from packed buffer.
+ * pack - Pack u64 number into bitfield of buffer.
*
* @pbuf: Pointer to a buffer holding the packed value.
- * @uval: Pointer to an u64 holding the unpacked value.
+ * @uval: CPU-readable unpacked value to pack.
* @startbit: The index (in logical notation, compensated for quirks) where
* the packed value starts within pbuf. Must be larger than, or
* equal to, endbit.
@@ -183,16 +137,12 @@ EXPORT_SYMBOL(pack);
* QUIRK_MSB_ON_THE_RIGHT.
*
* Return: 0 on success, EINVAL or ERANGE if called incorrectly. Assuming
- * correct usage, return code may be discarded. The @uval will be
- * modified on success.
+ * correct usage, return code may be discarded. The @pbuf memory will
+ * be modified on success.
*/
-int unpack(const void *pbuf, u64 *uval, size_t startbit, size_t endbit,
- size_t pbuflen, u8 quirks)
+int pack(void *pbuf, u64 uval, size_t startbit, size_t endbit, size_t pbuflen,
+ u8 quirks)
{
- /* Logical byte indices corresponding to the
- * start and end of the field.
- */
- int plogical_first_u8, plogical_last_u8, box;
/* width of the field to access in the pbuf */
u64 value_width;
@@ -207,6 +157,33 @@ int unpack(const void *pbuf, u64 *uval, size_t startbit, size_t endbit,
if (unlikely(value_width > 64))
return -ERANGE;
+ /* Check if "uval" fits in "value_width" bits.
+ * If value_width is 64, the check will fail, but any
+ * 64-bit uval will surely fit.
+ */
+ if (value_width < 64 && uval >= (1ull << value_width))
+ /* Cannot store "uval" inside "value_width" bits.
+ * Truncating "uval" is most certainly not desirable,
+ * so simply erroring out is appropriate.
+ */
+ return -ERANGE;
+
+ __pack(pbuf, uval, startbit, endbit, pbuflen, quirks);
+
+ return 0;
+}
+EXPORT_SYMBOL(pack);
+
+static void __unpack(const void *pbuf, u64 *uval, size_t startbit,
+ size_t endbit, size_t pbuflen, u8 quirks)
+{
+ /* Logical byte indices corresponding to the
+ * start and end of the field.
+ */
+ int plogical_first_u8 = startbit / BITS_PER_BYTE;
+ int plogical_last_u8 = endbit / BITS_PER_BYTE;
+ int box;
+
/* Initialize parameter */
*uval = 0;
@@ -214,9 +191,6 @@ int unpack(const void *pbuf, u64 *uval, size_t startbit, size_t endbit,
* no quirks, u8 by u8 (aligned at u8 boundaries), from high to low
* logical bit significance. "box" denotes the current logical u8.
*/
- plogical_first_u8 = startbit / BITS_PER_BYTE;
- plogical_last_u8 = endbit / BITS_PER_BYTE;
-
for (box = plogical_first_u8; box >= plogical_last_u8; box--) {
/* Bit indices into the currently accessed 8-bit box */
size_t box_start_bit, box_end_bit, box_addr;
@@ -271,6 +245,46 @@ int unpack(const void *pbuf, u64 *uval, size_t startbit, size_t endbit,
*uval &= ~proj_mask;
*uval |= pval;
}
+}
+
+/**
+ * unpack - Unpack u64 number from packed buffer.
+ *
+ * @pbuf: Pointer to a buffer holding the packed value.
+ * @uval: Pointer to an u64 holding the unpacked value.
+ * @startbit: The index (in logical notation, compensated for quirks) where
+ * the packed value starts within pbuf. Must be larger than, or
+ * equal to, endbit.
+ * @endbit: The index (in logical notation, compensated for quirks) where
+ * the packed value ends within pbuf. Must be smaller than, or equal
+ * to, startbit.
+ * @pbuflen: The length in bytes of the packed buffer pointed to by @pbuf.
+ * @quirks: A bit mask of QUIRK_LITTLE_ENDIAN, QUIRK_LSW32_IS_FIRST and
+ * QUIRK_MSB_ON_THE_RIGHT.
+ *
+ * Return: 0 on success, EINVAL or ERANGE if called incorrectly. Assuming
+ * correct usage, return code may be discarded. The @uval will be
+ * modified on success.
+ */
+int unpack(const void *pbuf, u64 *uval, size_t startbit, size_t endbit,
+ size_t pbuflen, u8 quirks)
+{
+ /* width of the field to access in the pbuf */
+ u64 value_width;
+
+ /* startbit is expected to be larger than endbit, and both are
+ * expected to be within the logically addressable range of the buffer.
+ */
+ if (startbit < endbit || startbit >= BITS_PER_BYTE * pbuflen)
+ /* Invalid function call */
+ return -EINVAL;
+
+ value_width = startbit - endbit + 1;
+ if (value_width > 64)
+ return -ERANGE;
+
+ __unpack(pbuf, uval, startbit, endbit, pbuflen, quirks);
+
return 0;
}
EXPORT_SYMBOL(unpack);
--
2.47.0.265.g4ca455297942
* [PATCH net-next v2 2/9] lib: packing: demote truncation error in pack() to a warning in __pack()
2024-10-26 0:04 [PATCH net-next v2 0/9] lib: packing: introduce and use (un)pack_fields Jacob Keller
2024-10-26 0:04 ` [PATCH net-next v2 1/9] lib: packing: create __pack() and __unpack() variants without error checking Jacob Keller
@ 2024-10-26 0:04 ` Jacob Keller
2024-10-26 0:04 ` [PATCH net-next v2 3/9] lib: packing: add pack_fields() and unpack_fields() Jacob Keller
` (8 subsequent siblings)
10 siblings, 0 replies; 23+ messages in thread
From: Jacob Keller @ 2024-10-26 0:04 UTC (permalink / raw)
To: Vladimir Oltean, Andrew Morton, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Tony Nguyen, Przemek Kitszel, Masahiro Yamada
Cc: linux-kernel, netdev, Jacob Keller, Vladimir Oltean
From: Vladimir Oltean <vladimir.oltean@nxp.com>
Most of the sanity checks in pack() and unpack() can be covered at
compile time. There is only one exception, and that is truncation of the
uval during a pack() operation.
We'd like the error-less __pack() to catch that condition as well. At
the same time, it is currently the responsibility of consumer drivers
(currently just sja1105) to print something when this error occurs,
and then to discard the return code.
We can instead print a loud warning in the library code and continue
with the truncated __pack() operation. In practice, having the warning
is very important: see commit 24deec6b9e4a ("net: dsa: sja1105:
disallow C45 transactions on the BASE-TX MDIO bus"), where the bug was
caught exactly by noticing this print.
Add the first print to the packing library, and at the same time remove
the print for the same condition from the sja1105 driver, to avoid
double printing.
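To make the behavior change concrete (the values below are made up for
illustration): a call which previously returned -ERANGE from pack()
due to truncation now succeeds, but warns loudly from __pack():

	u8 buf[2] = {};

	/* 0x1ff needs 9 bits, but bits 7:0 hold only 8, so this now
	 * triggers "Cannot store 0x1ff inside bits 7-0 - will truncate"
	 * and proceeds with the truncated store.
	 */
	pack(buf, 0x1ff, 7, 0, sizeof(buf), 0);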
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
---
drivers/net/dsa/sja1105/sja1105_static_config.c | 8 ++------
lib/packing.c | 26 ++++++++++---------------
2 files changed, 12 insertions(+), 22 deletions(-)
diff --git a/drivers/net/dsa/sja1105/sja1105_static_config.c b/drivers/net/dsa/sja1105/sja1105_static_config.c
index baba204ad62f..3d790f8c6f4d 100644
--- a/drivers/net/dsa/sja1105/sja1105_static_config.c
+++ b/drivers/net/dsa/sja1105/sja1105_static_config.c
@@ -26,12 +26,8 @@ void sja1105_pack(void *buf, const u64 *val, int start, int end, size_t len)
pr_err("Start bit (%d) expected to be larger than end (%d)\n",
start, end);
} else if (rc == -ERANGE) {
- if ((start - end + 1) > 64)
- pr_err("Field %d-%d too large for 64 bits!\n",
- start, end);
- else
- pr_err("Cannot store %llx inside bits %d-%d (would truncate)\n",
- *val, start, end);
+ pr_err("Field %d-%d too large for 64 bits!\n",
+ start, end);
}
dump_stack();
}
diff --git a/lib/packing.c b/lib/packing.c
index c29b079fdd78..2bf81951dfc8 100644
--- a/lib/packing.c
+++ b/lib/packing.c
@@ -59,8 +59,17 @@ static void __pack(void *pbuf, u64 uval, size_t startbit, size_t endbit,
*/
int plogical_first_u8 = startbit / BITS_PER_BYTE;
int plogical_last_u8 = endbit / BITS_PER_BYTE;
+ int value_width = startbit - endbit + 1;
int box;
+ /* Check if "uval" fits in "value_width" bits.
+ * The test only works for value_width < 64, but in the latter case,
+ * any 64-bit uval will surely fit.
+ */
+ WARN(value_width < 64 && uval >= (1ull << value_width),
+ "Cannot store 0x%llx inside bits %zu-%zu - will truncate\n",
+ uval, startbit, endbit);
+
/* Iterate through an idealistic view of the pbuf as an u64 with
* no quirks, u8 by u8 (aligned at u8 boundaries), from high to low
* logical bit significance. "box" denotes the current logical u8.
@@ -143,9 +152,6 @@ static void __pack(void *pbuf, u64 uval, size_t startbit, size_t endbit,
int pack(void *pbuf, u64 uval, size_t startbit, size_t endbit, size_t pbuflen,
u8 quirks)
{
- /* width of the field to access in the pbuf */
- u64 value_width;
-
/* startbit is expected to be larger than endbit, and both are
* expected to be within the logically addressable range of the buffer.
*/
@@ -153,19 +159,7 @@ int pack(void *pbuf, u64 uval, size_t startbit, size_t endbit, size_t pbuflen,
/* Invalid function call */
return -EINVAL;
- value_width = startbit - endbit + 1;
- if (unlikely(value_width > 64))
- return -ERANGE;
-
- /* Check if "uval" fits in "value_width" bits.
- * If value_width is 64, the check will fail, but any
- * 64-bit uval will surely fit.
- */
- if (value_width < 64 && uval >= (1ull << value_width))
- /* Cannot store "uval" inside "value_width" bits.
- * Truncating "uval" is most certainly not desirable,
- * so simply erroring out is appropriate.
- */
+ if (unlikely(startbit - endbit >= 64))
return -ERANGE;
__pack(pbuf, uval, startbit, endbit, pbuflen, quirks);
--
2.47.0.265.g4ca455297942
* [PATCH net-next v2 3/9] lib: packing: add pack_fields() and unpack_fields()
2024-10-26 0:04 [PATCH net-next v2 0/9] lib: packing: introduce and use (un)pack_fields Jacob Keller
2024-10-26 0:04 ` [PATCH net-next v2 1/9] lib: packing: create __pack() and __unpack() variants without error checking Jacob Keller
2024-10-26 0:04 ` [PATCH net-next v2 2/9] lib: packing: demote truncation error in pack() to a warning in __pack() Jacob Keller
@ 2024-10-26 0:04 ` Jacob Keller
2024-11-03 18:31 ` Jakub Kicinski
2024-10-26 0:04 ` [PATCH net-next v2 4/9] ice: remove int_q_state from ice_tlan_ctx Jacob Keller
` (7 subsequent siblings)
10 siblings, 1 reply; 23+ messages in thread
From: Jacob Keller @ 2024-10-26 0:04 UTC (permalink / raw)
To: Vladimir Oltean, Andrew Morton, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Tony Nguyen, Przemek Kitszel, Masahiro Yamada
Cc: linux-kernel, netdev, Jacob Keller, Vladimir Oltean
From: Vladimir Oltean <vladimir.oltean@nxp.com>
This is a new API which caters to the following requirements:
- Pack or unpack a large number of fields to/from a buffer with a
small code footprint. The current alternative is to open-code a large
number of calls to pack() and unpack(), or to use packing() to halve
that number. But packing() is not const-correct.
- Use unpacked numbers stored in variables smaller than u64. This
reduces the rodata footprint of the stored field arrays.
- Perform error checking at compile time, rather than at runtime, and
return void from the API functions. To that end, we introduce
CHECK_PACKED_FIELD_*() macros to be used on the arrays of packed
fields. Note: the C preprocessor can't generate variable-length code
(loops), as would be required for array-style definitions of struct
packed_field arrays. So the sanity checks are emitted by a code
generator at build time into
$KBUILD_OUTPUT/include/generated/packing-checks.h. There are explicit
macros for sanity-checking arrays of 1 packed field, 2 packed fields,
3 packed fields, ..., all the way to 50 packed fields. In practice,
the sja1105 driver will actually need the variant with 40 fields. This
isn't as bad as it seems: feeding a 39-entry array into the
CHECK_PACKED_FIELDS_40() macro will actually generate a compilation
error, so mistakes are very likely to be caught by the developer and
thus are not a problem. To limit the amount of generated code, each
CHECK_PACKED_FIELDS_* macro is gated behind a separate config option
which the driver must select. This avoids generating thousands of
lines of unused macros.
- Reduced rodata footprint for the storage of the packed field arrays.
To that end, we have struct packed_field_s (small) and packed_field_m
(medium). More can be added as needed (unlikely for now). On these
types, the same generic pack_fields() and unpack_fields() API can be
used, thanks to the C11 _Generic() selection feature, which calls
pack_fields_s() or pack_fields_m() depending on the type of the
"fields" array - a simple form of polymorphism. Which function is
actually called is resolved at compile time; see the usage sketch
below.
Over time, packing() is expected to be completely replaced either with
pack() or with pack_fields().
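As a usage sketch (the structure, field layout and buffer length are
hypothetical; only the macros and functions come from this patch), a
driver that selects CONFIG_PACKING_CHECK_FIELDS_2 could write:

	struct foo {		/* hypothetical unpacked structure */
		u8 bar;
		u16 baz;
	};

	static const struct packed_field_s foo_fields[] = {
		PACKED_FIELD(31, 24, struct foo, bar),
		PACKED_FIELD(23, 8, struct foo, baz),
	};

	void foo_pack(void *buf, const struct foo *foo)
	{
		CHECK_PACKED_FIELDS_2(foo_fields, 4);
		pack_fields(buf, 4, foo, foo_fields, QUIRK_LITTLE_ENDIAN);
	}

For reference, the generated CHECK_PACKED_FIELDS_2() in
include/generated/packing-checks.h expands to roughly the following
(reconstructed from the generator's printf statements):

	#define CHECK_PACKED_FIELDS_2(fields, pbuflen) \
		({ typeof(&(fields)[0]) _f = (fields); typeof(pbuflen) _len = (pbuflen); \
		BUILD_BUG_ON(ARRAY_SIZE(fields) != 2); \
		CHECK_PACKED_FIELD(_f[0], _len); \
		CHECK_PACKED_FIELD(_f[1], _len); \
		CHECK_PACKED_FIELD_OVERLAP(_f[0], _f[1]); })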
Co-developed-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
---
include/linux/packing.h | 73 ++++++++++
lib/gen_packing_checks.c | 193 +++++++++++++++++++++++++
lib/packing.c | 149 ++++++++++++++++++-
Kbuild | 168 +++++++++++++++++++++-
lib/Kconfig | 361 ++++++++++++++++++++++++++++++++++++++++++++++-
5 files changed, 940 insertions(+), 4 deletions(-)
diff --git a/include/linux/packing.h b/include/linux/packing.h
index 5d36dcd06f60..31afee8344a5 100644
--- a/include/linux/packing.h
+++ b/include/linux/packing.h
@@ -26,4 +26,77 @@ int pack(void *pbuf, u64 uval, size_t startbit, size_t endbit, size_t pbuflen,
int unpack(const void *pbuf, u64 *uval, size_t startbit, size_t endbit,
size_t pbuflen, u8 quirks);
+#define GEN_PACKED_FIELD_MEMBERS(__type) \
+ __type startbit; \
+ __type endbit; \
+ __type offset; \
+ __type size;
+
+/* Small packed field. Use with bit offsets < 256, buffers < 32B and
+ * unpacked structures < 256B.
+ */
+struct packed_field_s {
+ GEN_PACKED_FIELD_MEMBERS(u8);
+};
+
+/* Medium packed field. Use with bit offsets < 65536, buffers < 8KB and
+ * unpacked structures < 64KB.
+ */
+struct packed_field_m {
+ GEN_PACKED_FIELD_MEMBERS(u16);
+};
+
+#define PACKED_FIELD(start, end, struct_name, struct_field) \
+ { \
+ (start), \
+ (end), \
+ offsetof(struct_name, struct_field), \
+ sizeof_field(struct_name, struct_field), \
+ }
+
+#if IS_ENABLED(CONFIG_PACKING_CHECK_FIELDS)
+
+#define CHECK_PACKED_FIELD(field, pbuflen) \
+ ({ typeof(field) __f = (field); typeof(pbuflen) __len = (pbuflen); \
+ BUILD_BUG_ON(__f.startbit < __f.endbit); \
+ BUILD_BUG_ON(__f.startbit >= BITS_PER_BYTE * __len); \
+ BUILD_BUG_ON(__f.startbit - __f.endbit >= BITS_PER_BYTE * __f.size); \
+ BUILD_BUG_ON(__f.size != 1 && __f.size != 2 && __f.size != 4 && __f.size != 8); })
+
+#define CHECK_PACKED_FIELD_OVERLAP(field1, field2) \
+ ({ typeof(field1) _f1 = (field1); typeof(field2) _f2 = (field2); \
+ BUILD_BUG_ON(max(_f1.endbit, _f2.endbit) <= min(_f1.startbit, _f2.startbit)); })
+
+#include <generated/packing-checks.h>
+
+#endif
+
+void pack_fields_s(void *pbuf, size_t pbuflen, const void *ustruct,
+ const struct packed_field_s *fields, size_t num_fields,
+ u8 quirks);
+
+void pack_fields_m(void *pbuf, size_t pbuflen, const void *ustruct,
+ const struct packed_field_m *fields, size_t num_fields,
+ u8 quirks);
+
+void unpack_fields_s(const void *pbuf, size_t pbuflen, void *ustruct,
+ const struct packed_field_s *fields, size_t num_fields,
+ u8 quirks);
+
+void unpack_fields_m(const void *pbuf, size_t pbuflen, void *ustruct,
+ const struct packed_field_m *fields, size_t num_fields,
+ u8 quirks);
+
+#define pack_fields(pbuf, pbuflen, ustruct, fields, quirks) \
+ _Generic((fields), \
+ const struct packed_field_s * : pack_fields_s, \
+ const struct packed_field_m * : pack_fields_m \
+ )(pbuf, pbuflen, ustruct, fields, ARRAY_SIZE(fields), quirks)
+
+#define unpack_fields(pbuf, pbuflen, ustruct, fields, quirks) \
+ _Generic((fields), \
+ const struct packed_field_s * : unpack_fields_s, \
+ const struct packed_field_m * : unpack_fields_m \
+ )(pbuf, pbuflen, ustruct, fields, ARRAY_SIZE(fields), quirks)
+
#endif
diff --git a/lib/gen_packing_checks.c b/lib/gen_packing_checks.c
new file mode 100644
index 000000000000..5ff346a190c0
--- /dev/null
+++ b/lib/gen_packing_checks.c
@@ -0,0 +1,193 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <stdbool.h>
+#include <stdio.h>
+
+static bool generate_checks[51];
+
+static void parse_defines(void)
+{
+#ifdef PACKING_CHECK_FIELDS_1
+ generate_checks[1] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_2
+ generate_checks[2] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_3
+ generate_checks[3] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_4
+ generate_checks[4] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_5
+ generate_checks[5] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_6
+ generate_checks[6] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_7
+ generate_checks[7] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_8
+ generate_checks[8] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_9
+ generate_checks[9] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_10
+ generate_checks[10] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_11
+ generate_checks[11] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_12
+ generate_checks[12] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_13
+ generate_checks[13] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_14
+ generate_checks[14] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_15
+ generate_checks[15] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_16
+ generate_checks[16] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_17
+ generate_checks[17] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_18
+ generate_checks[18] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_19
+ generate_checks[19] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_20
+ generate_checks[20] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_21
+ generate_checks[21] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_22
+ generate_checks[22] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_23
+ generate_checks[23] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_24
+ generate_checks[24] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_25
+ generate_checks[25] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_26
+ generate_checks[26] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_27
+ generate_checks[27] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_28
+ generate_checks[28] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_29
+ generate_checks[29] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_30
+ generate_checks[30] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_31
+ generate_checks[31] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_32
+ generate_checks[32] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_33
+ generate_checks[33] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_34
+ generate_checks[34] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_35
+ generate_checks[35] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_36
+ generate_checks[36] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_37
+ generate_checks[37] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_38
+ generate_checks[38] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_39
+ generate_checks[39] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_40
+ generate_checks[40] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_41
+ generate_checks[41] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_42
+ generate_checks[42] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_43
+ generate_checks[43] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_44
+ generate_checks[44] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_45
+ generate_checks[45] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_46
+ generate_checks[46] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_47
+ generate_checks[47] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_48
+ generate_checks[48] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_49
+ generate_checks[49] = true;
+#endif
+#ifdef PACKING_CHECK_FIELDS_50
+ generate_checks[50] = true;
+#endif
+}
+
+int main(int argc, char **argv)
+{
+ parse_defines();
+
+ printf("/* Automatically generated - do not edit */\n\n");
+ printf("#ifndef GENERATED_PACKING_CHECKS_H\n");
+ printf("#define GENERATED_PACKING_CHECKS_H\n\n");
+
+ for (int i = 1; i <= 50; i++) {
+ if (!generate_checks[i])
+ continue;
+
+ printf("#define CHECK_PACKED_FIELDS_%d(fields, pbuflen) \\\n", i);
+ printf("\t({ typeof(&(fields)[0]) _f = (fields); typeof(pbuflen) _len = (pbuflen); \\\n");
+ printf("\tBUILD_BUG_ON(ARRAY_SIZE(fields) != %d); \\\n", i);
+ for (int j = 0; j < i; j++) {
+ bool final = (i == 1);
+
+ printf("\tCHECK_PACKED_FIELD(_f[%d], _len);%s\n",
+ j, final ? " })\n" : " \\");
+ }
+ for (int j = 1; j < i; j++) {
+ for (int k = 0; k < j; k++) {
+ bool final = (j == i - 1) && (k == j - 1);
+
+ printf("\tCHECK_PACKED_FIELD_OVERLAP(_f[%d], _f[%d]);%s\n",
+ k, j, final ? " })\n" : " \\");
+ }
+ }
+ }
+
+ printf("#endif /* GENERATED_PACKING_CHECKS_H */\n");
+}
diff --git a/lib/packing.c b/lib/packing.c
index 2bf81951dfc8..b7ca55269d0f 100644
--- a/lib/packing.c
+++ b/lib/packing.c
@@ -5,10 +5,37 @@
#include <linux/packing.h>
#include <linux/module.h>
#include <linux/bitops.h>
+#include <linux/bits.h>
#include <linux/errno.h>
#include <linux/types.h>
#include <linux/bitrev.h>
+#define __pack_fields(pbuf, pbuflen, ustruct, fields, num_fields, quirks) \
+ ({ \
+ for (size_t i = 0; i < (num_fields); i++) { \
+ typeof(&(fields)[0]) field = &(fields)[i]; \
+ u64 uval; \
+ \
+ uval = ustruct_field_to_u64(ustruct, field->offset, field->size); \
+ \
+ __pack(pbuf, uval, field->startbit, field->endbit, \
+ pbuflen, quirks); \
+ } \
+ })
+
+#define __unpack_fields(pbuf, pbuflen, ustruct, fields, num_fields, quirks) \
+ ({ \
+ for (size_t i = 0; i < (num_fields); i++) { \
+ typeof(&(fields)[0]) field = &fields[i]; \
+ u64 uval; \
+ \
+ __unpack(pbuf, &uval, field->startbit, field->endbit, \
+ pbuflen, quirks); \
+ \
+ u64_to_ustruct_field(ustruct, field->offset, field->size, uval); \
+ } \
+ })
+
/**
* calculate_box_addr - Determine physical location of byte in buffer
* @box: Index of byte within buffer seen as a logical big-endian big number
@@ -168,8 +195,8 @@ int pack(void *pbuf, u64 uval, size_t startbit, size_t endbit, size_t pbuflen,
}
EXPORT_SYMBOL(pack);
-static void __unpack(const void *pbuf, u64 *uval, size_t startbit,
- size_t endbit, size_t pbuflen, u8 quirks)
+static void __unpack(const void *pbuf, u64 *uval, size_t startbit, size_t endbit,
+ size_t pbuflen, u8 quirks)
{
/* Logical byte indices corresponding to the
* start and end of the field.
@@ -322,4 +349,122 @@ int packing(void *pbuf, u64 *uval, int startbit, int endbit, size_t pbuflen,
}
EXPORT_SYMBOL(packing);
+static u64 ustruct_field_to_u64(const void *ustruct, size_t field_offset,
+ size_t field_size)
+{
+ switch (field_size) {
+ case 1:
+ return *((u8 *)(ustruct + field_offset));
+ case 2:
+ return *((u16 *)(ustruct + field_offset));
+ case 4:
+ return *((u32 *)(ustruct + field_offset));
+ default:
+ return *((u64 *)(ustruct + field_offset));
+ }
+}
+
+static void u64_to_ustruct_field(void *ustruct, size_t field_offset,
+ size_t field_size, u64 uval)
+{
+ switch (field_size) {
+ case 1:
+ *((u8 *)(ustruct + field_offset)) = uval;
+ break;
+ case 2:
+ *((u16 *)(ustruct + field_offset)) = uval;
+ break;
+ case 4:
+ *((u32 *)(ustruct + field_offset)) = uval;
+ break;
+ default:
+ *((u64 *)(ustruct + field_offset)) = uval;
+ break;
+ }
+}
+
+/**
+ * pack_fields_s - Pack array of small fields
+ *
+ * @pbuf: Pointer to a buffer holding the packed value.
+ * @pbuflen: The length in bytes of the packed buffer pointed to by @pbuf.
+ * @ustruct: Pointer to CPU-readable structure holding the unpacked value.
+ * It is expected (but not checked) that this has the same data type
+ * as all struct packed_field_s definitions.
+ * @fields: Array of small packed fields definition. They must not overlap.
+ * @num_fields: Length of @fields array.
+ * @quirks: A bit mask of QUIRK_LITTLE_ENDIAN, QUIRK_LSW32_IS_FIRST and
+ * QUIRK_MSB_ON_THE_RIGHT.
+ */
+void pack_fields_s(void *pbuf, size_t pbuflen, const void *ustruct,
+ const struct packed_field_s *fields, size_t num_fields,
+ u8 quirks)
+{
+ __pack_fields(pbuf, pbuflen, ustruct, fields, num_fields, quirks);
+}
+EXPORT_SYMBOL(pack_fields_s);
+
+/**
+ * pack_fields_m - Pack array of medium fields
+ *
+ * @pbuf: Pointer to a buffer holding the packed value.
+ * @pbuflen: The length in bytes of the packed buffer pointed to by @pbuf.
+ * @ustruct: Pointer to CPU-readable structure holding the unpacked value.
+ * It is expected (but not checked) that this has the same data type
+ * as all struct packed_field_s definitions.
+ * @fields: Array of medium packed fields definition. They must not overlap.
+ * @num_fields: Length of @fields array.
+ * @quirks: A bit mask of QUIRK_LITTLE_ENDIAN, QUIRK_LSW32_IS_FIRST and
+ * QUIRK_MSB_ON_THE_RIGHT.
+ */
+void pack_fields_m(void *pbuf, size_t pbuflen, const void *ustruct,
+ const struct packed_field_m *fields, size_t num_fields,
+ u8 quirks)
+{
+ __pack_fields(pbuf, pbuflen, ustruct, fields, num_fields, quirks);
+}
+EXPORT_SYMBOL(pack_fields_m);
+
+/**
+ * unpack_fields_s - Unpack array of small fields
+ *
+ * @pbuf: Pointer to a buffer holding the packed value.
+ * @pbuflen: The length in bytes of the packed buffer pointed to by @pbuf.
+ * @ustruct: Pointer to CPU-readable structure holding the unpacked value.
+ * It is expected (but not checked) that this has the same data type
+ * as all struct packed_field_s definitions.
+ * @fields: Array of small packed fields definition. They must not overlap.
+ * @num_fields: Length of @fields array.
+ * @quirks: A bit mask of QUIRK_LITTLE_ENDIAN, QUIRK_LSW32_IS_FIRST and
+ * QUIRK_MSB_ON_THE_RIGHT.
+ */
+void unpack_fields_s(const void *pbuf, size_t pbuflen, void *ustruct,
+ const struct packed_field_s *fields, size_t num_fields,
+ u8 quirks)
+{
+ __unpack_fields(pbuf, pbuflen, ustruct, fields, num_fields, quirks);
+}
+EXPORT_SYMBOL(unpack_fields_s);
+
+/**
+ * unpack_fields_m - Unpack array of medium fields
+ *
+ * @pbuf: Pointer to a buffer holding the packed value.
+ * @pbuflen: The length in bytes of the packed buffer pointed to by @pbuf.
+ * @ustruct: Pointer to CPU-readable structure holding the unpacked value.
+ * It is expected (but not checked) that this has the same data type
+ * as all struct packed_field_s definitions.
+ * @fields: Array of medium packed fields definition. They must not overlap.
+ * @num_fields: Length of @fields array.
+ * @quirks: A bit mask of QUIRK_LITTLE_ENDIAN, QUIRK_LSW32_IS_FIRST and
+ * QUIRK_MSB_ON_THE_RIGHT.
+ */
+void unpack_fields_m(const void *pbuf, size_t pbuflen, void *ustruct,
+ const struct packed_field_m *fields, size_t num_fields,
+ u8 quirks)
+{
+ __unpack_fields(pbuf, pbuflen, ustruct, fields, num_fields, quirks);
+}
+EXPORT_SYMBOL(unpack_fields_m);
+
MODULE_DESCRIPTION("Generic bitfield packing and unpacking");
diff --git a/Kbuild b/Kbuild
index 464b34a08f51..1a4d31b3eb6c 100644
--- a/Kbuild
+++ b/Kbuild
@@ -34,6 +34,172 @@ arch/$(SRCARCH)/kernel/asm-offsets.s: $(timeconst-file) $(bounds-file)
$(offsets-file): arch/$(SRCARCH)/kernel/asm-offsets.s FORCE
$(call filechk,offsets,__ASM_OFFSETS_H__)
+ifdef CONFIG_PACKING_CHECK_FIELDS
+
+# Generate packing-checks.h
+
+ifdef CONFIG_PACKING_CHECK_FIELDS_1
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_1
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_2
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_2
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_3
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_3
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_4
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_4
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_5
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_5
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_6
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_6
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_7
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_7
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_8
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_8
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_9
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_9
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_10
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_10
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_11
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_11
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_12
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_12
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_13
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_13
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_14
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_14
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_15
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_15
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_16
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_16
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_17
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_17
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_18
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_18
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_19
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_19
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_20
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_20
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_21
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_21
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_22
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_22
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_23
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_23
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_24
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_24
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_25
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_25
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_26
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_26
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_27
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_27
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_28
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_28
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_29
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_29
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_30
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_30
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_31
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_31
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_32
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_32
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_33
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_33
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_34
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_34
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_35
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_35
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_36
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_36
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_37
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_37
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_38
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_38
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_39
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_39
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_40
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_40
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_41
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_41
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_42
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_42
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_43
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_43
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_44
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_44
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_45
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_45
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_46
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_46
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_47
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_47
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_48
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_48
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_49
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_49
+endif
+ifdef CONFIG_PACKING_CHECK_FIELDS_50
+HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_50
+endif
+
+hostprogs += lib/gen_packing_checks
+
+packing-checks := include/generated/packing-checks.h
+
+filechk_gen_packing_checks = lib/gen_packing_checks
+
+$(packing-checks): lib/gen_packing_checks FORCE
+ $(call filechk,gen_packing_checks)
+
+endif
+
# Check for missing system calls
quiet_cmd_syscalls = CALL $<
@@ -70,7 +236,7 @@ $(atomic-checks): $(obj)/.checked-%: include/linux/atomic/% FORCE
# A phony target that depends on all the preparation targets
PHONY += prepare
-prepare: $(offsets-file) missing-syscalls $(atomic-checks)
+prepare: $(offsets-file) missing-syscalls $(atomic-checks) $(packing-checks)
@:
# Ordinary directory descending
diff --git a/lib/Kconfig b/lib/Kconfig
index 50d85f38b569..68b440d622f6 100644
--- a/lib/Kconfig
+++ b/lib/Kconfig
@@ -40,9 +40,11 @@ config PACKING
When in doubt, say N.
+if PACKING
+
config PACKING_KUNIT_TEST
tristate "KUnit tests for packing library" if !KUNIT_ALL_TESTS
- depends on PACKING && KUNIT
+ depends on KUNIT
default KUNIT_ALL_TESTS
help
This builds KUnit tests for the packing library.
@@ -52,6 +54,363 @@ config PACKING_KUNIT_TEST
When in doubt, say N.
+config PACKING_CHECK_FIELDS
+ bool
+ help
+ This option generates the <include/generated/packing-checks.h> file.
+
+config PACKING_CHECK_FIELDS_1
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 1 element.
+
+config PACKING_CHECK_FIELDS_2
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 2 elements.
+
+config PACKING_CHECK_FIELDS_3
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 3 elements.
+
+config PACKING_CHECK_FIELDS_4
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 4 elements.
+
+config PACKING_CHECK_FIELDS_5
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 5 elements.
+
+config PACKING_CHECK_FIELDS_6
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 6 elements.
+
+config PACKING_CHECK_FIELDS_7
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 7 elements.
+
+config PACKING_CHECK_FIELDS_8
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 8 elements.
+
+config PACKING_CHECK_FIELDS_9
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 9 elements.
+
+config PACKING_CHECK_FIELDS_10
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 10 elements.
+
+config PACKING_CHECK_FIELDS_11
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 11 elements.
+
+config PACKING_CHECK_FIELDS_12
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 12 elements.
+
+config PACKING_CHECK_FIELDS_13
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 13 elements.
+
+config PACKING_CHECK_FIELDS_14
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 14 elements.
+
+config PACKING_CHECK_FIELDS_15
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 15 elements.
+
+config PACKING_CHECK_FIELDS_16
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 16 elements.
+
+config PACKING_CHECK_FIELDS_17
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 17 elements.
+
+config PACKING_CHECK_FIELDS_18
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 18 elements.
+
+config PACKING_CHECK_FIELDS_19
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 19 elements.
+
+config PACKING_CHECK_FIELDS_20
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 20 elements.
+
+config PACKING_CHECK_FIELDS_21
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 21 elements.
+
+config PACKING_CHECK_FIELDS_22
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 22 elements.
+
+config PACKING_CHECK_FIELDS_23
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 23 elements.
+
+config PACKING_CHECK_FIELDS_24
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 24 elements.
+
+config PACKING_CHECK_FIELDS_25
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 25 elements.
+
+config PACKING_CHECK_FIELDS_26
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 26 elements.
+
+config PACKING_CHECK_FIELDS_27
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 27 elements.
+
+config PACKING_CHECK_FIELDS_28
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 28 elements.
+
+config PACKING_CHECK_FIELDS_29
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 29 elements.
+
+config PACKING_CHECK_FIELDS_30
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 30 elements.
+
+config PACKING_CHECK_FIELDS_31
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 31 elements.
+
+config PACKING_CHECK_FIELDS_32
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 32 elements.
+
+config PACKING_CHECK_FIELDS_33
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 33 elements.
+
+config PACKING_CHECK_FIELDS_34
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 34 elements.
+
+config PACKING_CHECK_FIELDS_35
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 35 elements.
+
+config PACKING_CHECK_FIELDS_36
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 36 elements.
+
+config PACKING_CHECK_FIELDS_37
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 37 elements.
+
+config PACKING_CHECK_FIELDS_38
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 38 elements.
+
+config PACKING_CHECK_FIELDS_39
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 39 elements.
+
+config PACKING_CHECK_FIELDS_40
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 40 elements.
+
+config PACKING_CHECK_FIELDS_41
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 41 elements.
+
+config PACKING_CHECK_FIELDS_42
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 42 elements.
+
+config PACKING_CHECK_FIELDS_43
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 43 elements.
+
+config PACKING_CHECK_FIELDS_44
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 44 elements.
+
+config PACKING_CHECK_FIELDS_45
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 45 elements.
+
+config PACKING_CHECK_FIELDS_46
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 46 elements.
+
+config PACKING_CHECK_FIELDS_47
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 47 elements.
+
+config PACKING_CHECK_FIELDS_48
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 48 elements.
+
+config PACKING_CHECK_FIELDS_49
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 49 elements.
+
+config PACKING_CHECK_FIELDS_50
+ bool
+ select PACKING_CHECK_FIELDS
+ help
+ This option generates compile-time checks for struct packed_field
+ arrays of 50 elements.
+
+endif # PACKING
+
config BITREVERSE
tristate
--
2.47.0.265.g4ca455297942
* Re: [PATCH net-next v2 3/9] lib: packing: add pack_fields() and unpack_fields()
2024-10-26 0:04 ` [PATCH net-next v2 3/9] lib: packing: add pack_fields() and unpack_fields() Jacob Keller
@ 2024-11-03 18:31 ` Jakub Kicinski
2024-11-04 19:06 ` Jacob Keller
0 siblings, 1 reply; 23+ messages in thread
From: Jakub Kicinski @ 2024-11-03 18:31 UTC (permalink / raw)
To: Jacob Keller
Cc: Vladimir Oltean, Andrew Morton, Eric Dumazet, Paolo Abeni,
Tony Nguyen, Przemek Kitszel, Masahiro Yamada, linux-kernel,
netdev, Vladimir Oltean
On Fri, 25 Oct 2024 17:04:55 -0700 Jacob Keller wrote:
> +ifdef CONFIG_PACKING_CHECK_FIELDS_1
> +HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_1
> +endif
> +ifdef CONFIG_PACKING_CHECK_FIELDS_2
> +HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_2
> +endif
[...]
> +ifdef CONFIG_PACKING_CHECK_FIELDS_49
> +HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_49
> +endif
> +ifdef CONFIG_PACKING_CHECK_FIELDS_50
> +HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_50
This series is marked as Not Applicable in PW. Not sure why.
I can't bring myself to revive it, though; this isn't pretty.
It'd be one thing to do the codegen and the ugly copy/paste
50 times in lib/, but on top of that all drivers have to select
all the field counts they use...
Since all you want is compile-time checking and the logic is quite
well constrained - can we put the field definitions in a separate
ro section and make modpost validate them?
Also Documentation needs to be extended with basic use examples.
* Re: [PATCH net-next v2 3/9] lib: packing: add pack_fields() and unpack_fields()
2024-11-03 18:31 ` Jakub Kicinski
@ 2024-11-04 19:06 ` Jacob Keller
0 siblings, 0 replies; 23+ messages in thread
From: Jacob Keller @ 2024-11-04 19:06 UTC (permalink / raw)
To: Jakub Kicinski
Cc: Vladimir Oltean, Andrew Morton, Eric Dumazet, Paolo Abeni,
Tony Nguyen, Przemek Kitszel, Masahiro Yamada, linux-kernel,
netdev, Vladimir Oltean
On 11/3/2024 10:31 AM, Jakub Kicinski wrote:
> On Fri, 25 Oct 2024 17:04:55 -0700 Jacob Keller wrote:
>> +ifdef CONFIG_PACKING_CHECK_FIELDS_1
>> +HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_1
>> +endif
>> +ifdef CONFIG_PACKING_CHECK_FIELDS_2
>> +HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_2
>> +endif
> [...]
>> +ifdef CONFIG_PACKING_CHECK_FIELDS_49
>> +HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_49
>> +endif
>> +ifdef CONFIG_PACKING_CHECK_FIELDS_50
>> +HOSTCFLAGS_lib/gen_packing_checks.o += -DPACKING_CHECK_FIELDS_50
>
> This series is marked as Not Applicable in PW. Not sure why.
>
> I can't bring myself to revive it, though; this isn't pretty.
> It'd be one thing to do the codegen and the ugly copy/paste
> 50 times in lib/, but on top of that all drivers have to select
> all the field counts they use...
>
> Since all you want is compile-time checking and the logic is quite
> well constrained - can we put the field definitions in a separate
> ro section and make modpost validate them?
>
This is a much better idea actually. I think I can figure that out.
> Also Documentation needs to be extended with basic use examples.
Will fix in v3
* [PATCH net-next v2 4/9] ice: remove int_q_state from ice_tlan_ctx
2024-10-26 0:04 [PATCH net-next v2 0/9] lib: packing: introduce and use (un)pack_fields Jacob Keller
` (2 preceding siblings ...)
2024-10-26 0:04 ` [PATCH net-next v2 3/9] lib: packing: add pack_fields() and unpack_fields() Jacob Keller
@ 2024-10-26 0:04 ` Jacob Keller
2024-10-29 14:36 ` Daniel Machon
2024-10-26 0:04 ` [PATCH net-next v2 5/9] ice: use structures to keep track of queue context size Jacob Keller
` (6 subsequent siblings)
10 siblings, 1 reply; 23+ messages in thread
From: Jacob Keller @ 2024-10-26 0:04 UTC (permalink / raw)
To: Vladimir Oltean, Andrew Morton, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Tony Nguyen, Przemek Kitszel, Masahiro Yamada
Cc: linux-kernel, netdev, Jacob Keller
The int_q_state field of the ice_tlan_ctx structure represents the internal
queue state. However, we never actually need to assign or read this field
during normal operation. In fact, unpacking it would not even be possible,
as its 122-bit width is larger than a u64. Remove this field from the
ice_tlan_ctx structure, and remove its packing field from the
ice_tlan_ctx_info array.
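For reference, this is the existing check in the packing library that
makes a field wider than 64 bits impossible to unpack (see patch 1/9):

	value_width = startbit - endbit + 1;
	if (value_width > 64)
		return -ERANGE;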
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
---
drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h | 1 -
drivers/net/ethernet/intel/ice/ice_common.c | 1 -
2 files changed, 2 deletions(-)
diff --git a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h
index 611577ebc29d..0e8ed8c226e6 100644
--- a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h
+++ b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h
@@ -590,7 +590,6 @@ struct ice_tlan_ctx {
u8 drop_ena;
u8 cache_prof_idx;
u8 pkt_shaper_prof_idx;
- u8 int_q_state; /* width not needed - internal - DO NOT WRITE!!! */
};
#endif /* _ICE_LAN_TX_RX_H_ */
diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
index b22e71dc59d4..0f5a80269a7b 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.c
+++ b/drivers/net/ethernet/intel/ice/ice_common.c
@@ -1467,7 +1467,6 @@ const struct ice_ctx_ele ice_tlan_ctx_info[] = {
ICE_CTX_STORE(ice_tlan_ctx, drop_ena, 1, 165),
ICE_CTX_STORE(ice_tlan_ctx, cache_prof_idx, 2, 166),
ICE_CTX_STORE(ice_tlan_ctx, pkt_shaper_prof_idx, 3, 168),
- ICE_CTX_STORE(ice_tlan_ctx, int_q_state, 122, 171),
{ 0 }
};
--
2.47.0.265.g4ca455297942
* Re: [PATCH net-next v2 4/9] ice: remove int_q_state from ice_tlan_ctx
2024-10-26 0:04 ` [PATCH net-next v2 4/9] ice: remove int_q_state from ice_tlan_ctx Jacob Keller
@ 2024-10-29 14:36 ` Daniel Machon
0 siblings, 0 replies; 23+ messages in thread
From: Daniel Machon @ 2024-10-29 14:36 UTC (permalink / raw)
To: Jacob Keller
Cc: Vladimir Oltean, Andrew Morton, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Tony Nguyen, Przemek Kitszel, Masahiro Yamada,
linux-kernel, netdev
> The int_q_state field of the ice_tlan_ctx structure represents the internal
> queue state. However, we never actually need to assign or read this field
> during normal operation. In fact, unpacking it would not even be possible,
> as its 122-bit width is larger than a u64. Remove this field from the
> ice_tlan_ctx structure, and remove its packing field from the
> ice_tlan_ctx_info array.
>
Reviewed-by: Daniel Machon <daniel.machon@microchip.com>
* [PATCH net-next v2 5/9] ice: use structures to keep track of queue context size
2024-10-26 0:04 [PATCH net-next v2 0/9] lib: packing: introduce and use (un)pack_fields Jacob Keller
` (3 preceding siblings ...)
2024-10-26 0:04 ` [PATCH net-next v2 4/9] ice: remove int_q_state from ice_tlan_ctx Jacob Keller
@ 2024-10-26 0:04 ` Jacob Keller
2024-10-29 14:42 ` Daniel Machon
2024-10-26 0:04 ` [PATCH net-next v2 6/9] ice: use <linux/packing.h> for Tx and Rx queue context data Jacob Keller
` (5 subsequent siblings)
10 siblings, 1 reply; 23+ messages in thread
From: Jacob Keller @ 2024-10-26 0:04 UTC (permalink / raw)
To: Vladimir Oltean, Andrew Morton, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Tony Nguyen, Przemek Kitszel, Masahiro Yamada
Cc: linux-kernel, netdev, Jacob Keller, Vladimir Oltean
The ice Tx and Rx queue contexts are currently stored as arrays of bytes
with a defined size (ICE_RXQ_CTX_SZ and ICE_TXQ_CTX_SZ). The packed queue
context is often passed to other functions as a simple u8 * pointer, which
does not allow tracking the size. This makes the queue context API easy to
misuse, as you can pass an arbitrary u8 array or pointer.
Introduce wrapper typedefs which use a __packed structure that has the
proper fixed size for the Tx and Rx context buffers. This enables the
compiler to track the size of the value, ensuring that a buffer of the
wrong size is rejected at compile time.
The existing APIs do not benefit much from this change, however the
wrapping structures will be used to simplify the arguments of new packing
functions based on the recently introduced pack_fields API.
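As a sketch of what the typedef buys us, using the converted
ice_copy_rxq_ctx_to_hw() from this patch (the raw[] variable is made
up for illustration):

	static int ice_copy_rxq_ctx_to_hw(struct ice_hw *hw,
					  const ice_rxq_ctx_buf_t *rxq_ctx,
					  u32 rxq_index);

	ice_rxq_ctx_buf_t buf = {};
	u8 raw[ICE_RXQ_CTX_SZ];

	ice_copy_rxq_ctx_to_hw(hw, &buf, 0);	/* OK: correctly sized type */
	ice_copy_rxq_ctx_to_hw(hw, raw, 0);	/* incompatible pointer type,
						 * rejected at compile time
						 */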
Co-developed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
---
drivers/net/ethernet/intel/ice/ice_adminq_cmd.h | 11 +++++++++--
drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h | 2 --
drivers/net/ethernet/intel/ice/ice_base.c | 2 +-
drivers/net/ethernet/intel/ice/ice_common.c | 24 +++++++++++-------------
4 files changed, 21 insertions(+), 18 deletions(-)
diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
index 1f01f3501d6b..a0ec9c97c2d7 100644
--- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
@@ -12,6 +12,13 @@
#define ICE_AQC_TOPO_MAX_LEVEL_NUM 0x9
#define ICE_AQ_SET_MAC_FRAME_SIZE_MAX 9728
+#define ICE_RXQ_CTX_SIZE_DWORDS 8
+#define ICE_RXQ_CTX_SZ (ICE_RXQ_CTX_SIZE_DWORDS * sizeof(u32))
+#define ICE_TXQ_CTX_SZ 22
+
+typedef struct __packed { u8 buf[ICE_RXQ_CTX_SZ]; } ice_rxq_ctx_buf_t;
+typedef struct __packed { u8 buf[ICE_TXQ_CTX_SZ]; } ice_txq_ctx_buf_t;
+
struct ice_aqc_generic {
__le32 param0;
__le32 param1;
@@ -2067,10 +2074,10 @@ struct ice_aqc_add_txqs_perq {
__le16 txq_id;
u8 rsvd[2];
__le32 q_teid;
- u8 txq_ctx[22];
+ ice_txq_ctx_buf_t txq_ctx;
u8 rsvd2[2];
struct ice_aqc_txsched_elem info;
-};
+} __packed;
/* The format of the command buffer for Add Tx LAN Queues (0x0C30)
* is an array of the following structs. Please note that the length of
diff --git a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h
index 0e8ed8c226e6..a76e5b0e7861 100644
--- a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h
+++ b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h
@@ -371,8 +371,6 @@ enum ice_rx_flex_desc_status_error_1_bits {
ICE_RX_FLEX_DESC_STATUS1_LAST /* this entry must be last!!! */
};
-#define ICE_RXQ_CTX_SIZE_DWORDS 8
-#define ICE_RXQ_CTX_SZ (ICE_RXQ_CTX_SIZE_DWORDS * sizeof(u32))
#define ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS 22
#define ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS 5
#define GLTCLAN_CQ_CNTX(i, CQ) (GLTCLAN_CQ_CNTX0(CQ) + ((i) * 0x0800))
diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
index 3a8e156d7d86..260942877968 100644
--- a/drivers/net/ethernet/intel/ice/ice_base.c
+++ b/drivers/net/ethernet/intel/ice/ice_base.c
@@ -909,7 +909,7 @@ ice_vsi_cfg_txq(struct ice_vsi *vsi, struct ice_tx_ring *ring,
ice_setup_tx_ctx(ring, &tlan_ctx, pf_q);
/* copy context contents into the qg_buf */
qg_buf->txqs[0].txq_id = cpu_to_le16(pf_q);
- ice_set_ctx(hw, (u8 *)&tlan_ctx, qg_buf->txqs[0].txq_ctx,
+ ice_set_ctx(hw, (u8 *)&tlan_ctx, (u8 *)&qg_buf->txqs[0].txq_ctx,
ice_tlan_ctx_info);
/* init queue specific tail reg. It is referred as
diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
index 0f5a80269a7b..48d95cb49864 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.c
+++ b/drivers/net/ethernet/intel/ice/ice_common.c
@@ -1359,29 +1359,27 @@ int ice_reset(struct ice_hw *hw, enum ice_reset_req req)
/**
* ice_copy_rxq_ctx_to_hw
* @hw: pointer to the hardware structure
- * @ice_rxq_ctx: pointer to the rxq context
+ * @rxq_ctx: pointer to the packed Rx queue context
* @rxq_index: the index of the Rx queue
*
* Copies rxq context from dense structure to HW register space
*/
-static int
-ice_copy_rxq_ctx_to_hw(struct ice_hw *hw, u8 *ice_rxq_ctx, u32 rxq_index)
+static int ice_copy_rxq_ctx_to_hw(struct ice_hw *hw,
+ const ice_rxq_ctx_buf_t *rxq_ctx,
+ u32 rxq_index)
{
u8 i;
- if (!ice_rxq_ctx)
- return -EINVAL;
-
if (rxq_index > QRX_CTRL_MAX_INDEX)
return -EINVAL;
/* Copy each dword separately to HW */
for (i = 0; i < ICE_RXQ_CTX_SIZE_DWORDS; i++) {
- wr32(hw, QRX_CONTEXT(i, rxq_index),
- *((u32 *)(ice_rxq_ctx + (i * sizeof(u32)))));
+ u32 ctx = ((const u32 *)rxq_ctx)[i];
- ice_debug(hw, ICE_DBG_QCTX, "qrxdata[%d]: %08X\n", i,
- *((u32 *)(ice_rxq_ctx + (i * sizeof(u32)))));
+ wr32(hw, QRX_CONTEXT(i, rxq_index), ctx);
+
+ ice_debug(hw, ICE_DBG_QCTX, "qrxdata[%d]: %08X\n", i, ctx);
}
return 0;
@@ -1426,15 +1424,15 @@ static const struct ice_ctx_ele ice_rlan_ctx_info[] = {
int ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
u32 rxq_index)
{
- u8 ctx_buf[ICE_RXQ_CTX_SZ] = { 0 };
+ ice_rxq_ctx_buf_t buf = {};
if (!rlan_ctx)
return -EINVAL;
rlan_ctx->prefena = 1;
- ice_set_ctx(hw, (u8 *)rlan_ctx, ctx_buf, ice_rlan_ctx_info);
- return ice_copy_rxq_ctx_to_hw(hw, ctx_buf, rxq_index);
+ ice_set_ctx(hw, (u8 *)rlan_ctx, (u8 *)&buf, ice_rlan_ctx_info);
+ return ice_copy_rxq_ctx_to_hw(hw, &buf, rxq_index);
}
/* LAN Tx Queue Context */
--
2.47.0.265.g4ca455297942
^ permalink raw reply related [flat|nested] 23+ messages in thread
* Re: [PATCH net-next v2 5/9] ice: use structures to keep track of queue context size
2024-10-26 0:04 ` [PATCH net-next v2 5/9] ice: use structures to keep track of queue context size Jacob Keller
@ 2024-10-29 14:42 ` Daniel Machon
0 siblings, 0 replies; 23+ messages in thread
From: Daniel Machon @ 2024-10-29 14:42 UTC (permalink / raw)
To: Jacob Keller
Cc: Vladimir Oltean, Andrew Morton, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Tony Nguyen, Przemek Kitszel, Masahiro Yamada,
linux-kernel, netdev, Vladimir Oltean
> The ice Tx and Rx queue context are currently stored as arrays of bytes
> with defined size (ICE_RXQ_CTX_SZ and ICE_TXQ_CTX_SZ). The packed queue
> context is often passed to other functions as a simple u8 * pointer, which
> does not allow tracking the size. This makes the queue context API easy to
> misuse, as you can pass an arbitrary u8 array or pointer.
>
> Introduce wrapper typedefs which use a __packed structure that has the
> proper fixed size for the Tx and Rx context buffers. This enables the
> compiler to track the size of the value and ensures that passing the wrong
> buffer size will be detected by the compiler.
>
> The existing APIs do not benefit much from this change, however the
> wrapping structures will be used to simplify the arguments of new packing
> functions based on the recently introduced pack_fields API.
>
Reviewed-by: Daniel Machon <daniel.machon@microchip.com>
^ permalink raw reply [flat|nested] 23+ messages in thread
* [PATCH net-next v2 6/9] ice: use <linux/packing.h> for Tx and Rx queue context data
2024-10-26 0:04 [PATCH net-next v2 0/9] lib: packing: introduce and use (un)pack_fields Jacob Keller
` (4 preceding siblings ...)
2024-10-26 0:04 ` [PATCH net-next v2 5/9] ice: use structures to keep track of queue context size Jacob Keller
@ 2024-10-26 0:04 ` Jacob Keller
2024-10-29 14:50 ` Daniel Machon
2024-10-26 0:04 ` [PATCH net-next v2 7/9] ice: reduce size of queue context fields Jacob Keller
` (4 subsequent siblings)
10 siblings, 1 reply; 23+ messages in thread
From: Jacob Keller @ 2024-10-26 0:04 UTC (permalink / raw)
To: Vladimir Oltean, Andrew Morton, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Tony Nguyen, Przemek Kitszel, Masahiro Yamada
Cc: linux-kernel, netdev, Jacob Keller
The ice driver needs to write the Tx and Rx queue context when programming
Tx and Rx queues. This is currently done using some bespoke custom logic
via the ice_set_ctx() and its helper functions, along with bit position
definitions in the ice_tlan_ctx_info and ice_rlan_ctx_info structures.
This logic does work, but is problematic for several reasons:
1) ice_set_ctx requires a helper function for each byte size being packed,
as it uses a separate function to pack u8, u16, u32, and u64 fields.
This requires 4 functions which contain near-duplicate logic with the
types changed out.
2) The logic in the ice_pack_ctx_word, ice_pack_ctx_dword, and
ice_pack_ctx_qword does not handle values which straddle alignment
boundaries very well. This requires that several fields in the
ice_tlan_ctx_info and ice_rlan_ctx_info be a size larger than their bit
size should require.
3) Future support for live migration will require adding unpacking
functions to take the packed hardware context and unpack it into the
ice_rlan_ctx and ice_tlan_ctx structures. Implementing this would
require implementing ice_get_ctx, and its associated helper functions,
which essentially doubles the amount of code required.
The Linux kernel has had a packing library that can handle this logic since
commit 554aae35007e ("lib: Add support for generic packing operations").
The library was recently extended with support for packing or unpacking an
array of fields, with a similar structure as the ice_ctx_ele structure.
Replace the ice-specific ice_set_ctx() logic with the recently added
pack_fields and packed_field_s infrastructure from <linux/packing.h>.
For API simplicity, the Tx and Rx queue contexts are programmed using
separate ice_pack_txq_ctx() and ice_pack_rxq_ctx() functions. This avoids
needing to export the packed_field_s arrays. The functions take pointers to
the appropriate ice_txq_ctx_buf_t and ice_rxq_ctx_buf_t types, ensuring
that only buffers of the appropriate size are passed.
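As a minimal sketch of the resulting call pattern (illustrative only: the
two-entry field table below is a hypothetical subset, and buf/ctx stand
for an ice_rxq_ctx_buf_t and a struct ice_rlan_ctx):

	#include <linux/packing.h>

	/* Each entry maps a struct member to the bit range it occupies
	 * in the packed layout: PACKED_FIELD(msb, lsb, struct, member).
	 */
	static const struct packed_field_s example_fields[] = {
		PACKED_FIELD(12, 0, struct ice_rlan_ctx, head),
		PACKED_FIELD(20, 13, struct ice_rlan_ctx, cpuid),
	};

	/* Pack the CPU-friendly context into the HW buffer in one call */
	pack_fields(&buf, sizeof(buf), &ctx, example_fields,
		    QUIRK_LITTLE_ENDIAN | QUIRK_LSW32_IS_FIRST);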
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
---
drivers/net/ethernet/intel/ice/ice_common.h | 5 +-
drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h | 14 --
drivers/net/ethernet/intel/ice/ice_base.c | 3 +-
drivers/net/ethernet/intel/ice/ice_common.c | 249 +++++--------------------
drivers/net/ethernet/intel/Kconfig | 3 +
5 files changed, 50 insertions(+), 224 deletions(-)
diff --git a/drivers/net/ethernet/intel/ice/ice_common.h b/drivers/net/ethernet/intel/ice/ice_common.h
index 27208a60cece..a68bea3934e3 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.h
+++ b/drivers/net/ethernet/intel/ice/ice_common.h
@@ -92,9 +92,8 @@ ice_aq_set_rss_key(struct ice_hw *hw, u16 vsi_handle,
bool ice_check_sq_alive(struct ice_hw *hw, struct ice_ctl_q_info *cq);
int ice_aq_q_shutdown(struct ice_hw *hw, bool unloading);
void ice_fill_dflt_direct_cmd_desc(struct ice_aq_desc *desc, u16 opcode);
-extern const struct ice_ctx_ele ice_tlan_ctx_info[];
-int ice_set_ctx(struct ice_hw *hw, u8 *src_ctx, u8 *dest_ctx,
- const struct ice_ctx_ele *ce_info);
+
+void ice_pack_txq_ctx(const struct ice_tlan_ctx *ctx, ice_txq_ctx_buf_t *buf);
extern struct mutex ice_global_cfg_lock_sw;
diff --git a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h
index a76e5b0e7861..31d4a445d640 100644
--- a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h
+++ b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h
@@ -408,20 +408,6 @@ struct ice_rlan_ctx {
u8 prefena; /* NOTE: normally must be set to 1 at init */
};
-struct ice_ctx_ele {
- u16 offset;
- u16 size_of;
- u16 width;
- u16 lsb;
-};
-
-#define ICE_CTX_STORE(_struct, _ele, _width, _lsb) { \
- .offset = offsetof(struct _struct, _ele), \
- .size_of = sizeof_field(struct _struct, _ele), \
- .width = _width, \
- .lsb = _lsb, \
-}
-
/* for hsplit_0 field of Rx RLAN context */
enum ice_rlan_ctx_rx_hsplit_0 {
ICE_RLAN_RX_HSPLIT_0_NO_SPLIT = 0,
diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
index 260942877968..0a325dec804e 100644
--- a/drivers/net/ethernet/intel/ice/ice_base.c
+++ b/drivers/net/ethernet/intel/ice/ice_base.c
@@ -909,8 +909,7 @@ ice_vsi_cfg_txq(struct ice_vsi *vsi, struct ice_tx_ring *ring,
ice_setup_tx_ctx(ring, &tlan_ctx, pf_q);
/* copy context contents into the qg_buf */
qg_buf->txqs[0].txq_id = cpu_to_le16(pf_q);
- ice_set_ctx(hw, (u8 *)&tlan_ctx, (u8 *)&qg_buf->txqs[0].txq_ctx,
- ice_tlan_ctx_info);
+ ice_pack_txq_ctx(&tlan_ctx, &qg_buf->txqs[0].txq_ctx);
/* init queue specific tail reg. It is referred as
* transmit comm scheduler queue doorbell.
diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
index 48d95cb49864..905f5c745a7b 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.c
+++ b/drivers/net/ethernet/intel/ice/ice_common.c
@@ -6,6 +6,7 @@
#include "ice_adminq_cmd.h"
#include "ice_flow.h"
#include "ice_ptp_hw.h"
+#include <linux/packing.h>
#define ICE_PF_RESET_WAIT_COUNT 300
#define ICE_MAX_NETLIST_SIZE 10
@@ -1385,9 +1386,12 @@ static int ice_copy_rxq_ctx_to_hw(struct ice_hw *hw,
return 0;
}
+#define ICE_CTX_STORE(struct_name, struct_field, width, lsb) \
+ PACKED_FIELD((lsb) + (width) - 1, (lsb), struct struct_name, struct_field)
+
/* LAN Rx Queue Context */
-static const struct ice_ctx_ele ice_rlan_ctx_info[] = {
- /* Field Width LSB */
+static const struct packed_field_s ice_rlan_ctx_fields[] = {
+ /* Field Width LSB */
ICE_CTX_STORE(ice_rlan_ctx, head, 13, 0),
ICE_CTX_STORE(ice_rlan_ctx, cpuid, 8, 13),
ICE_CTX_STORE(ice_rlan_ctx, base, 57, 32),
@@ -1408,9 +1412,26 @@ static const struct ice_ctx_ele ice_rlan_ctx_info[] = {
ICE_CTX_STORE(ice_rlan_ctx, tphhead_ena, 1, 196),
ICE_CTX_STORE(ice_rlan_ctx, lrxqthresh, 3, 198),
ICE_CTX_STORE(ice_rlan_ctx, prefena, 1, 201),
- { 0 }
};
+/**
+ * ice_pack_rxq_ctx - Pack Rx queue context into a HW buffer
+ * @ctx: the Rx queue context to pack
+ * @buf: the HW buffer to pack into
+ *
+ * Pack the Rx queue context from the CPU-friendly unpacked buffer into its
+ * bit-packed HW layout.
+ */
+static void ice_pack_rxq_ctx(const struct ice_rlan_ctx *ctx,
+ ice_rxq_ctx_buf_t *buf)
+{
+ CHECK_PACKED_FIELDS_20(ice_rlan_ctx_fields, ICE_RXQ_CTX_SZ);
+ BUILD_BUG_ON(sizeof(*buf) != ICE_RXQ_CTX_SZ);
+
+ pack_fields(buf, sizeof(*buf), ctx, ice_rlan_ctx_fields,
+ QUIRK_LITTLE_ENDIAN | QUIRK_LSW32_IS_FIRST);
+}
+
/**
* ice_write_rxq_ctx
* @hw: pointer to the hardware structure
@@ -1431,12 +1452,13 @@ int ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
rlan_ctx->prefena = 1;
- ice_set_ctx(hw, (u8 *)rlan_ctx, (u8 *)&buf, ice_rlan_ctx_info);
+ ice_pack_rxq_ctx(rlan_ctx, &buf);
+
return ice_copy_rxq_ctx_to_hw(hw, &buf, rxq_index);
}
/* LAN Tx Queue Context */
-const struct ice_ctx_ele ice_tlan_ctx_info[] = {
+static const struct packed_field_s ice_tlan_ctx_fields[] = {
/* Field Width LSB */
ICE_CTX_STORE(ice_tlan_ctx, base, 57, 0),
ICE_CTX_STORE(ice_tlan_ctx, port_num, 3, 57),
@@ -1465,9 +1487,25 @@ const struct ice_ctx_ele ice_tlan_ctx_info[] = {
ICE_CTX_STORE(ice_tlan_ctx, drop_ena, 1, 165),
ICE_CTX_STORE(ice_tlan_ctx, cache_prof_idx, 2, 166),
ICE_CTX_STORE(ice_tlan_ctx, pkt_shaper_prof_idx, 3, 168),
- { 0 }
};
+/**
+ * ice_pack_txq_ctx - Pack Tx queue context into a HW buffer
+ * @ctx: the Tx queue context to pack
+ * @buf: the HW buffer to pack into
+ *
+ * Pack the Tx queue context from the CPU-friendly unpacked buffer into its
+ * bit-packed HW layout.
+ */
+void ice_pack_txq_ctx(const struct ice_tlan_ctx *ctx, ice_txq_ctx_buf_t *buf)
+{
+ CHECK_PACKED_FIELDS_27(ice_tlan_ctx_fields, ICE_TXQ_CTX_SZ);
+ BUILD_BUG_ON(sizeof(*buf) != ICE_TXQ_CTX_SZ);
+
+ pack_fields(buf, sizeof(*buf), ctx, ice_tlan_ctx_fields,
+ QUIRK_LITTLE_ENDIAN | QUIRK_LSW32_IS_FIRST);
+}
+
/* Sideband Queue command wrappers */
/**
@@ -4545,205 +4583,6 @@ ice_aq_add_rdma_qsets(struct ice_hw *hw, u8 num_qset_grps,
/* End of FW Admin Queue command wrappers */
-/**
- * ice_pack_ctx_byte - write a byte to a packed context structure
- * @src_ctx: unpacked source context structure
- * @dest_ctx: packed destination context data
- * @ce_info: context element description
- */
-static void ice_pack_ctx_byte(u8 *src_ctx, u8 *dest_ctx,
- const struct ice_ctx_ele *ce_info)
-{
- u8 src_byte, dest_byte, mask;
- u8 *from, *dest;
- u16 shift_width;
-
- /* copy from the next struct field */
- from = src_ctx + ce_info->offset;
-
- /* prepare the bits and mask */
- shift_width = ce_info->lsb % 8;
- mask = GENMASK(ce_info->width - 1 + shift_width, shift_width);
-
- src_byte = *from;
- src_byte <<= shift_width;
- src_byte &= mask;
-
- /* get the current bits from the target bit string */
- dest = dest_ctx + (ce_info->lsb / 8);
-
- memcpy(&dest_byte, dest, sizeof(dest_byte));
-
- dest_byte &= ~mask; /* get the bits not changing */
- dest_byte |= src_byte; /* add in the new bits */
-
- /* put it all back */
- memcpy(dest, &dest_byte, sizeof(dest_byte));
-}
-
-/**
- * ice_pack_ctx_word - write a word to a packed context structure
- * @src_ctx: unpacked source context structure
- * @dest_ctx: packed destination context data
- * @ce_info: context element description
- */
-static void ice_pack_ctx_word(u8 *src_ctx, u8 *dest_ctx,
- const struct ice_ctx_ele *ce_info)
-{
- u16 src_word, mask;
- __le16 dest_word;
- u8 *from, *dest;
- u16 shift_width;
-
- /* copy from the next struct field */
- from = src_ctx + ce_info->offset;
-
- /* prepare the bits and mask */
- shift_width = ce_info->lsb % 8;
- mask = GENMASK(ce_info->width - 1 + shift_width, shift_width);
-
- /* don't swizzle the bits until after the mask because the mask bits
- * will be in a different bit position on big endian machines
- */
- src_word = *(u16 *)from;
- src_word <<= shift_width;
- src_word &= mask;
-
- /* get the current bits from the target bit string */
- dest = dest_ctx + (ce_info->lsb / 8);
-
- memcpy(&dest_word, dest, sizeof(dest_word));
-
- dest_word &= ~(cpu_to_le16(mask)); /* get the bits not changing */
- dest_word |= cpu_to_le16(src_word); /* add in the new bits */
-
- /* put it all back */
- memcpy(dest, &dest_word, sizeof(dest_word));
-}
-
-/**
- * ice_pack_ctx_dword - write a dword to a packed context structure
- * @src_ctx: unpacked source context structure
- * @dest_ctx: packed destination context data
- * @ce_info: context element description
- */
-static void ice_pack_ctx_dword(u8 *src_ctx, u8 *dest_ctx,
- const struct ice_ctx_ele *ce_info)
-{
- u32 src_dword, mask;
- __le32 dest_dword;
- u8 *from, *dest;
- u16 shift_width;
-
- /* copy from the next struct field */
- from = src_ctx + ce_info->offset;
-
- /* prepare the bits and mask */
- shift_width = ce_info->lsb % 8;
- mask = GENMASK(ce_info->width - 1 + shift_width, shift_width);
-
- /* don't swizzle the bits until after the mask because the mask bits
- * will be in a different bit position on big endian machines
- */
- src_dword = *(u32 *)from;
- src_dword <<= shift_width;
- src_dword &= mask;
-
- /* get the current bits from the target bit string */
- dest = dest_ctx + (ce_info->lsb / 8);
-
- memcpy(&dest_dword, dest, sizeof(dest_dword));
-
- dest_dword &= ~(cpu_to_le32(mask)); /* get the bits not changing */
- dest_dword |= cpu_to_le32(src_dword); /* add in the new bits */
-
- /* put it all back */
- memcpy(dest, &dest_dword, sizeof(dest_dword));
-}
-
-/**
- * ice_pack_ctx_qword - write a qword to a packed context structure
- * @src_ctx: unpacked source context structure
- * @dest_ctx: packed destination context data
- * @ce_info: context element description
- */
-static void ice_pack_ctx_qword(u8 *src_ctx, u8 *dest_ctx,
- const struct ice_ctx_ele *ce_info)
-{
- u64 src_qword, mask;
- __le64 dest_qword;
- u8 *from, *dest;
- u16 shift_width;
-
- /* copy from the next struct field */
- from = src_ctx + ce_info->offset;
-
- /* prepare the bits and mask */
- shift_width = ce_info->lsb % 8;
- mask = GENMASK_ULL(ce_info->width - 1 + shift_width, shift_width);
-
- /* don't swizzle the bits until after the mask because the mask bits
- * will be in a different bit position on big endian machines
- */
- src_qword = *(u64 *)from;
- src_qword <<= shift_width;
- src_qword &= mask;
-
- /* get the current bits from the target bit string */
- dest = dest_ctx + (ce_info->lsb / 8);
-
- memcpy(&dest_qword, dest, sizeof(dest_qword));
-
- dest_qword &= ~(cpu_to_le64(mask)); /* get the bits not changing */
- dest_qword |= cpu_to_le64(src_qword); /* add in the new bits */
-
- /* put it all back */
- memcpy(dest, &dest_qword, sizeof(dest_qword));
-}
-
-/**
- * ice_set_ctx - set context bits in packed structure
- * @hw: pointer to the hardware structure
- * @src_ctx: pointer to a generic non-packed context structure
- * @dest_ctx: pointer to memory for the packed structure
- * @ce_info: List of Rx context elements
- */
-int ice_set_ctx(struct ice_hw *hw, u8 *src_ctx, u8 *dest_ctx,
- const struct ice_ctx_ele *ce_info)
-{
- int f;
-
- for (f = 0; ce_info[f].width; f++) {
- /* We have to deal with each element of the FW response
- * using the correct size so that we are correct regardless
- * of the endianness of the machine.
- */
- if (ce_info[f].width > (ce_info[f].size_of * BITS_PER_BYTE)) {
- ice_debug(hw, ICE_DBG_QCTX, "Field %d width of %d bits larger than size of %d byte(s) ... skipping write\n",
- f, ce_info[f].width, ce_info[f].size_of);
- continue;
- }
- switch (ce_info[f].size_of) {
- case sizeof(u8):
- ice_pack_ctx_byte(src_ctx, dest_ctx, &ce_info[f]);
- break;
- case sizeof(u16):
- ice_pack_ctx_word(src_ctx, dest_ctx, &ce_info[f]);
- break;
- case sizeof(u32):
- ice_pack_ctx_dword(src_ctx, dest_ctx, &ce_info[f]);
- break;
- case sizeof(u64):
- ice_pack_ctx_qword(src_ctx, dest_ctx, &ce_info[f]);
- break;
- default:
- return -EINVAL;
- }
- }
-
- return 0;
-}
-
/**
* ice_get_lan_q_ctx - get the LAN queue context for the given VSI and TC
* @hw: pointer to the HW struct
diff --git a/drivers/net/ethernet/intel/Kconfig b/drivers/net/ethernet/intel/Kconfig
index 20bc40eec487..c4ea8ae65a95 100644
--- a/drivers/net/ethernet/intel/Kconfig
+++ b/drivers/net/ethernet/intel/Kconfig
@@ -292,6 +292,9 @@ config ICE
select DIMLIB
select LIBIE
select NET_DEVLINK
+ select PACKING
+ select PACKING_CHECK_FIELDS_20
+ select PACKING_CHECK_FIELDS_27
select PLDMFW
select DPLL
help
--
2.47.0.265.g4ca455297942
^ permalink raw reply related [flat|nested] 23+ messages in thread
* Re: [PATCH net-next v2 6/9] ice: use <linux/packing.h> for Tx and Rx queue context data
2024-10-26 0:04 ` [PATCH net-next v2 6/9] ice: use <linux/packing.h> for Tx and Rx queue context data Jacob Keller
@ 2024-10-29 14:50 ` Daniel Machon
2024-10-29 22:09 ` Jacob Keller
0 siblings, 1 reply; 23+ messages in thread
From: Daniel Machon @ 2024-10-29 14:50 UTC (permalink / raw)
To: Jacob Keller
Cc: Vladimir Oltean, Andrew Morton, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Tony Nguyen, Przemek Kitszel, Masahiro Yamada,
linux-kernel, netdev
Hi Jacob,
> The ice driver needs to write the Tx and Rx queue context when programming
> Tx and Rx queues. This is currently done using some bespoke custom logic
> via the ice_set_ctx() and its helper functions, along with bit position
> definitions in the ice_tlan_ctx_info and ice_rlan_ctx_info structures.
>
> This logic does work, but is problematic for several reasons:
>
> 1) ice_set_ctx requires a helper function for each byte size being packed,
> as it uses a separate function to pack u8, u16, u32, and u64 fields.
> This requires 4 functions which contain near-duplicate logic with the
> types changed out.
>
> 2) The logic in the ice_pack_ctx_word, ice_pack_ctx_dword, and
> ice_pack_ctx_qword does not handle values which straddle alignment
> boundaries very well. This requires that several fields in the
> ice_tlan_ctx_info and ice_rlan_ctx_info be a size larger than their bit
> size should require.
>
> 3) Future support for live migration will require adding unpacking
> functions to take the packed hardware context and unpack it into the
> ice_rlan_ctx and ice_tlan_ctx structures. Implementing this would
> require implementing ice_get_ctx, and its associated helper functions,
> which essentially doubles the amount of code required.
>
> The Linux kernel has had a packing library that can handle this logic since
> commit 554aae35007e ("lib: Add support for generic packing operations").
> The library was recently extended with support for packing or unpacking an
> array of fields, with a similar structure as the ice_ctx_ele structure.
>
> Replace the ice-specific ice_set_ctx() logic with the recently added
> pack_fields and packed_field_s infrastructure from <linux/packing.h>
>
> For API simplicity, the Tx and Rx queue context are programmed using
> separate ice_pack_txq_ctx() and ice_pack_rxq_ctx(). This avoids needing to
> export the packed_field_s arrays. The functions take pointers to the
> appropriate ice_txq_ctx_buf_t and ice_rxq_ctx_buf_t types, ensuring that
> only buffers of the appropriate size are passed.
>
> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
> ---
> drivers/net/ethernet/intel/ice/ice_common.h | 5 +-
> drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h | 14 --
> drivers/net/ethernet/intel/ice/ice_base.c | 3 +-
> drivers/net/ethernet/intel/ice/ice_common.c | 249 +++++--------------------
> drivers/net/ethernet/intel/Kconfig | 3 +
> 5 files changed, 50 insertions(+), 224 deletions(-)
>
> diff --git a/drivers/net/ethernet/intel/ice/ice_common.h b/drivers/net/ethernet/intel/ice/ice_common.h
> index 27208a60cece..a68bea3934e3 100644
> --- a/drivers/net/ethernet/intel/ice/ice_common.h
> +++ b/drivers/net/ethernet/intel/ice/ice_common.h
> @@ -92,9 +92,8 @@ ice_aq_set_rss_key(struct ice_hw *hw, u16 vsi_handle,
> bool ice_check_sq_alive(struct ice_hw *hw, struct ice_ctl_q_info *cq);
> int ice_aq_q_shutdown(struct ice_hw *hw, bool unloading);
> void ice_fill_dflt_direct_cmd_desc(struct ice_aq_desc *desc, u16 opcode);
> -extern const struct ice_ctx_ele ice_tlan_ctx_info[];
> -int ice_set_ctx(struct ice_hw *hw, u8 *src_ctx, u8 *dest_ctx,
> - const struct ice_ctx_ele *ce_info);
> +
> +void ice_pack_txq_ctx(const struct ice_tlan_ctx *ctx, ice_txq_ctx_buf_t *buf);
>
> extern struct mutex ice_global_cfg_lock_sw;
>
> diff --git a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h
> index a76e5b0e7861..31d4a445d640 100644
> --- a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h
> +++ b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h
> @@ -408,20 +408,6 @@ struct ice_rlan_ctx {
> u8 prefena; /* NOTE: normally must be set to 1 at init */
> };
>
> -struct ice_ctx_ele {
> - u16 offset;
> - u16 size_of;
> - u16 width;
> - u16 lsb;
> -};
> -
> -#define ICE_CTX_STORE(_struct, _ele, _width, _lsb) { \
> - .offset = offsetof(struct _struct, _ele), \
> - .size_of = sizeof_field(struct _struct, _ele), \
> - .width = _width, \
> - .lsb = _lsb, \
> -}
> -
> /* for hsplit_0 field of Rx RLAN context */
> enum ice_rlan_ctx_rx_hsplit_0 {
> ICE_RLAN_RX_HSPLIT_0_NO_SPLIT = 0,
> diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
> index 260942877968..0a325dec804e 100644
> --- a/drivers/net/ethernet/intel/ice/ice_base.c
> +++ b/drivers/net/ethernet/intel/ice/ice_base.c
> @@ -909,8 +909,7 @@ ice_vsi_cfg_txq(struct ice_vsi *vsi, struct ice_tx_ring *ring,
> ice_setup_tx_ctx(ring, &tlan_ctx, pf_q);
> /* copy context contents into the qg_buf */
> qg_buf->txqs[0].txq_id = cpu_to_le16(pf_q);
> - ice_set_ctx(hw, (u8 *)&tlan_ctx, (u8 *)&qg_buf->txqs[0].txq_ctx,
> - ice_tlan_ctx_info);
> + ice_pack_txq_ctx(&tlan_ctx, &qg_buf->txqs[0].txq_ctx);
>
> /* init queue specific tail reg. It is referred as
> * transmit comm scheduler queue doorbell.
> diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
> index 48d95cb49864..905f5c745a7b 100644
> --- a/drivers/net/ethernet/intel/ice/ice_common.c
> +++ b/drivers/net/ethernet/intel/ice/ice_common.c
> @@ -6,6 +6,7 @@
> #include "ice_adminq_cmd.h"
> #include "ice_flow.h"
> #include "ice_ptp_hw.h"
> +#include <linux/packing.h>
>
> #define ICE_PF_RESET_WAIT_COUNT 300
> #define ICE_MAX_NETLIST_SIZE 10
> @@ -1385,9 +1386,12 @@ static int ice_copy_rxq_ctx_to_hw(struct ice_hw *hw,
> return 0;
> }
>
> +#define ICE_CTX_STORE(struct_name, struct_field, width, lsb) \
> + PACKED_FIELD((lsb) + (width) - 1, (lsb), struct struct_name, struct_field)
> +
> /* LAN Rx Queue Context */
> -static const struct ice_ctx_ele ice_rlan_ctx_info[] = {
> - /* Field Width LSB */
> +static const struct packed_field_s ice_rlan_ctx_fields[] = {
> + /* Field Width LSB */
> ICE_CTX_STORE(ice_rlan_ctx, head, 13, 0),
> ICE_CTX_STORE(ice_rlan_ctx, cpuid, 8, 13),
> ICE_CTX_STORE(ice_rlan_ctx, base, 57, 32),
> @@ -1408,9 +1412,26 @@ static const struct ice_ctx_ele ice_rlan_ctx_info[] = {
> ICE_CTX_STORE(ice_rlan_ctx, tphhead_ena, 1, 196),
> ICE_CTX_STORE(ice_rlan_ctx, lrxqthresh, 3, 198),
> ICE_CTX_STORE(ice_rlan_ctx, prefena, 1, 201),
> - { 0 }
> };
>
> +/**
> + * ice_pack_rxq_ctx - Pack Rx queue context into a HW buffer
> + * @ctx: the Rx queue context to pack
> + * @buf: the HW buffer to pack into
> + *
> + * Pack the Rx queue context from the CPU-friendly unpacked buffer into its
> + * bit-packed HW layout.
> + */
> +static void ice_pack_rxq_ctx(const struct ice_rlan_ctx *ctx,
> + ice_rxq_ctx_buf_t *buf)
> +{
> + CHECK_PACKED_FIELDS_20(ice_rlan_ctx_fields, ICE_RXQ_CTX_SZ);
> + BUILD_BUG_ON(sizeof(*buf) != ICE_RXQ_CTX_SZ);
> +
> + pack_fields(buf, sizeof(*buf), ctx, ice_rlan_ctx_fields,
> + QUIRK_LITTLE_ENDIAN | QUIRK_LSW32_IS_FIRST);
> +}
> +
FWIW, I noticed that smatch bails out checking all the CHECK_PACKED_FIELDS_*
variants >= 20, with the warning:
ice_common.c:1486 ice_pack_txq_ctx() parse error: OOM: 3000148Kb sm_state_count = 413556
ice_common.c:1486 ice_pack_txq_ctx() warn: Function too hairy. No more merges.
ice_common.c:1486 ice_pack_txq_ctx() parse error: Function too hairy. Giving up. 43 second
Maybe this can just be ignored .. not sure :-)
> /**
> * ice_write_rxq_ctx
> * @hw: pointer to the hardware structure
> @@ -1431,12 +1452,13 @@ int ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
>
> rlan_ctx->prefena = 1;
>
> - ice_set_ctx(hw, (u8 *)rlan_ctx, (u8 *)&buf, ice_rlan_ctx_info);
> + ice_pack_rxq_ctx(rlan_ctx, &buf);
> +
> return ice_copy_rxq_ctx_to_hw(hw, &buf, rxq_index);
> }
>
> /* LAN Tx Queue Context */
> -const struct ice_ctx_ele ice_tlan_ctx_info[] = {
> +static const struct packed_field_s ice_tlan_ctx_fields[] = {
> /* Field Width LSB */
> ICE_CTX_STORE(ice_tlan_ctx, base, 57, 0),
> ICE_CTX_STORE(ice_tlan_ctx, port_num, 3, 57),
> @@ -1465,9 +1487,25 @@ const struct ice_ctx_ele ice_tlan_ctx_info[] = {
> ICE_CTX_STORE(ice_tlan_ctx, drop_ena, 1, 165),
> ICE_CTX_STORE(ice_tlan_ctx, cache_prof_idx, 2, 166),
> ICE_CTX_STORE(ice_tlan_ctx, pkt_shaper_prof_idx, 3, 168),
> - { 0 }
> };
>
> +/**
> + * ice_pack_txq_ctx - Pack Tx queue context into a HW buffer
> + * @ctx: the Tx queue context to pack
> + * @buf: the HW buffer to pack into
> + *
> + * Pack the Tx queue context from the CPU-friendly unpacked buffer into its
> + * bit-packed HW layout.
> + */
> +void ice_pack_txq_ctx(const struct ice_tlan_ctx *ctx, ice_txq_ctx_buf_t *buf)
> +{
> + CHECK_PACKED_FIELDS_27(ice_tlan_ctx_fields, ICE_TXQ_CTX_SZ);
> + BUILD_BUG_ON(sizeof(*buf) != ICE_TXQ_CTX_SZ);
> +
> + pack_fields(buf, sizeof(*buf), ctx, ice_tlan_ctx_fields,
> + QUIRK_LITTLE_ENDIAN | QUIRK_LSW32_IS_FIRST);
> +}
> +
Same here with the 27 variant.
> /* Sideband Queue command wrappers */
>
> /**
> @@ -4545,205 +4583,6 @@ ice_aq_add_rdma_qsets(struct ice_hw *hw, u8 num_qset_grps,
>
> /* End of FW Admin Queue command wrappers */
>
> -/**
> - * ice_pack_ctx_byte - write a byte to a packed context structure
> - * @src_ctx: unpacked source context structure
> - * @dest_ctx: packed destination context data
> - * @ce_info: context element description
> - */
> -static void ice_pack_ctx_byte(u8 *src_ctx, u8 *dest_ctx,
> - const struct ice_ctx_ele *ce_info)
> -{
> - u8 src_byte, dest_byte, mask;
> - u8 *from, *dest;
> - u16 shift_width;
> -
> - /* copy from the next struct field */
> - from = src_ctx + ce_info->offset;
> -
> - /* prepare the bits and mask */
> - shift_width = ce_info->lsb % 8;
> - mask = GENMASK(ce_info->width - 1 + shift_width, shift_width);
> -
> - src_byte = *from;
> - src_byte <<= shift_width;
> - src_byte &= mask;
> -
> - /* get the current bits from the target bit string */
> - dest = dest_ctx + (ce_info->lsb / 8);
> -
> - memcpy(&dest_byte, dest, sizeof(dest_byte));
> -
> - dest_byte &= ~mask; /* get the bits not changing */
> - dest_byte |= src_byte; /* add in the new bits */
> -
> - /* put it all back */
> - memcpy(dest, &dest_byte, sizeof(dest_byte));
> -}
> -
> -/**
> - * ice_pack_ctx_word - write a word to a packed context structure
> - * @src_ctx: unpacked source context structure
> - * @dest_ctx: packed destination context data
> - * @ce_info: context element description
> - */
> -static void ice_pack_ctx_word(u8 *src_ctx, u8 *dest_ctx,
> - const struct ice_ctx_ele *ce_info)
> -{
> - u16 src_word, mask;
> - __le16 dest_word;
> - u8 *from, *dest;
> - u16 shift_width;
> -
> - /* copy from the next struct field */
> - from = src_ctx + ce_info->offset;
> -
> - /* prepare the bits and mask */
> - shift_width = ce_info->lsb % 8;
> - mask = GENMASK(ce_info->width - 1 + shift_width, shift_width);
> -
> - /* don't swizzle the bits until after the mask because the mask bits
> - * will be in a different bit position on big endian machines
> - */
> - src_word = *(u16 *)from;
> - src_word <<= shift_width;
> - src_word &= mask;
> -
> - /* get the current bits from the target bit string */
> - dest = dest_ctx + (ce_info->lsb / 8);
> -
> - memcpy(&dest_word, dest, sizeof(dest_word));
> -
> - dest_word &= ~(cpu_to_le16(mask)); /* get the bits not changing */
> - dest_word |= cpu_to_le16(src_word); /* add in the new bits */
> -
> - /* put it all back */
> - memcpy(dest, &dest_word, sizeof(dest_word));
> -}
> -
> -/**
> - * ice_pack_ctx_dword - write a dword to a packed context structure
> - * @src_ctx: unpacked source context structure
> - * @dest_ctx: packed destination context data
> - * @ce_info: context element description
> - */
> -static void ice_pack_ctx_dword(u8 *src_ctx, u8 *dest_ctx,
> - const struct ice_ctx_ele *ce_info)
> -{
> - u32 src_dword, mask;
> - __le32 dest_dword;
> - u8 *from, *dest;
> - u16 shift_width;
> -
> - /* copy from the next struct field */
> - from = src_ctx + ce_info->offset;
> -
> - /* prepare the bits and mask */
> - shift_width = ce_info->lsb % 8;
> - mask = GENMASK(ce_info->width - 1 + shift_width, shift_width);
> -
> - /* don't swizzle the bits until after the mask because the mask bits
> - * will be in a different bit position on big endian machines
> - */
> - src_dword = *(u32 *)from;
> - src_dword <<= shift_width;
> - src_dword &= mask;
> -
> - /* get the current bits from the target bit string */
> - dest = dest_ctx + (ce_info->lsb / 8);
> -
> - memcpy(&dest_dword, dest, sizeof(dest_dword));
> -
> - dest_dword &= ~(cpu_to_le32(mask)); /* get the bits not changing */
> - dest_dword |= cpu_to_le32(src_dword); /* add in the new bits */
> -
> - /* put it all back */
> - memcpy(dest, &dest_dword, sizeof(dest_dword));
> -}
> -
> -/**
> - * ice_pack_ctx_qword - write a qword to a packed context structure
> - * @src_ctx: unpacked source context structure
> - * @dest_ctx: packed destination context data
> - * @ce_info: context element description
> - */
> -static void ice_pack_ctx_qword(u8 *src_ctx, u8 *dest_ctx,
> - const struct ice_ctx_ele *ce_info)
> -{
> - u64 src_qword, mask;
> - __le64 dest_qword;
> - u8 *from, *dest;
> - u16 shift_width;
> -
> - /* copy from the next struct field */
> - from = src_ctx + ce_info->offset;
> -
> - /* prepare the bits and mask */
> - shift_width = ce_info->lsb % 8;
> - mask = GENMASK_ULL(ce_info->width - 1 + shift_width, shift_width);
> -
> - /* don't swizzle the bits until after the mask because the mask bits
> - * will be in a different bit position on big endian machines
> - */
> - src_qword = *(u64 *)from;
> - src_qword <<= shift_width;
> - src_qword &= mask;
> -
> - /* get the current bits from the target bit string */
> - dest = dest_ctx + (ce_info->lsb / 8);
> -
> - memcpy(&dest_qword, dest, sizeof(dest_qword));
> -
> - dest_qword &= ~(cpu_to_le64(mask)); /* get the bits not changing */
> - dest_qword |= cpu_to_le64(src_qword); /* add in the new bits */
> -
> - /* put it all back */
> - memcpy(dest, &dest_qword, sizeof(dest_qword));
> -}
> -
> -/**
> - * ice_set_ctx - set context bits in packed structure
> - * @hw: pointer to the hardware structure
> - * @src_ctx: pointer to a generic non-packed context structure
> - * @dest_ctx: pointer to memory for the packed structure
> - * @ce_info: List of Rx context elements
> - */
> -int ice_set_ctx(struct ice_hw *hw, u8 *src_ctx, u8 *dest_ctx,
> - const struct ice_ctx_ele *ce_info)
> -{
> - int f;
> -
> - for (f = 0; ce_info[f].width; f++) {
> - /* We have to deal with each element of the FW response
> - * using the correct size so that we are correct regardless
> - * of the endianness of the machine.
> - */
> - if (ce_info[f].width > (ce_info[f].size_of * BITS_PER_BYTE)) {
> - ice_debug(hw, ICE_DBG_QCTX, "Field %d width of %d bits larger than size of %d byte(s) ... skipping write\n",
> - f, ce_info[f].width, ce_info[f].size_of);
> - continue;
> - }
> - switch (ce_info[f].size_of) {
> - case sizeof(u8):
> - ice_pack_ctx_byte(src_ctx, dest_ctx, &ce_info[f]);
> - break;
> - case sizeof(u16):
> - ice_pack_ctx_word(src_ctx, dest_ctx, &ce_info[f]);
> - break;
> - case sizeof(u32):
> - ice_pack_ctx_dword(src_ctx, dest_ctx, &ce_info[f]);
> - break;
> - case sizeof(u64):
> - ice_pack_ctx_qword(src_ctx, dest_ctx, &ce_info[f]);
> - break;
> - default:
> - return -EINVAL;
> - }
> - }
> -
> - return 0;
> -}
> -
Some nice cleanup!
> /**
> * ice_get_lan_q_ctx - get the LAN queue context for the given VSI and TC
> * @hw: pointer to the HW struct
> diff --git a/drivers/net/ethernet/intel/Kconfig b/drivers/net/ethernet/intel/Kconfig
> index 20bc40eec487..c4ea8ae65a95 100644
> --- a/drivers/net/ethernet/intel/Kconfig
> +++ b/drivers/net/ethernet/intel/Kconfig
> @@ -292,6 +292,9 @@ config ICE
> select DIMLIB
> select LIBIE
> select NET_DEVLINK
> + select PACKING
> + select PACKING_CHECK_FIELDS_20
> + select PACKING_CHECK_FIELDS_27
> select PLDMFW
> select DPLL
> help
>
> --
> 2.47.0.265.g4ca455297942
>
>
/Daniel
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH net-next v2 6/9] ice: use <linux/packing.h> for Tx and Rx queue context data
2024-10-29 14:50 ` Daniel Machon
@ 2024-10-29 22:09 ` Jacob Keller
2024-10-29 23:32 ` Jacob Keller
0 siblings, 1 reply; 23+ messages in thread
From: Jacob Keller @ 2024-10-29 22:09 UTC (permalink / raw)
To: Daniel Machon
Cc: Vladimir Oltean, Andrew Morton, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Tony Nguyen, Przemek Kitszel, Masahiro Yamada,
linux-kernel, netdev
On 10/29/2024 7:50 AM, Daniel Machon wrote:
> Hi Jacob,
>> +/**
>> + * ice_pack_rxq_ctx - Pack Rx queue context into a HW buffer
>> + * @ctx: the Rx queue context to pack
>> + * @buf: the HW buffer to pack into
>> + *
>> + * Pack the Rx queue context from the CPU-friendly unpacked buffer into its
>> + * bit-packed HW layout.
>> + */
>> +static void ice_pack_rxq_ctx(const struct ice_rlan_ctx *ctx,
>> + ice_rxq_ctx_buf_t *buf)
>> +{
>> + CHECK_PACKED_FIELDS_20(ice_rlan_ctx_fields, ICE_RXQ_CTX_SZ);
>> + BUILD_BUG_ON(sizeof(*buf) != ICE_RXQ_CTX_SZ);
>> +
>> + pack_fields(buf, sizeof(*buf), ctx, ice_rlan_ctx_fields,
>> + QUIRK_LITTLE_ENDIAN | QUIRK_LSW32_IS_FIRST);
>> +}
>> +
>
> FWIW, I noticed that smatch bails out checking all the CHECK_PACKED_FIELDS_*
> variants >= 20, with the warning:
>
> ice_common.c:1486 ice_pack_txq_ctx() parse error: OOM: 3000148Kb sm_state_count = 413556
> ice_common.c:1486 ice_pack_txq_ctx() warn: Function too hairy. No more merges.
> ice_common.c:1486 ice_pack_txq_ctx() parse error: Function too hairy. Giving up. 43 second
>
We might need to wrap these checks to become no-ops when running under
such a checker. It looks like the parser doesn't like the size of the
macros?
> Maybe this can just be ignored .. not sure :-)
>
I would prefer if we found a way to at least silence it, rather than
straight up ignore it.
I am not that familiar with smatch. Let me see if there's an obvious way
we can handle this.
>> +/**
>> + * ice_pack_txq_ctx - Pack Tx queue context into a HW buffer
>> + * @ctx: the Tx queue context to pack
>> + * @buf: the HW buffer to pack into
>> + *
>> + * Pack the Tx queue context from the CPU-friendly unpacked buffer into its
>> + * bit-packed HW layout.
>> + */
>> +void ice_pack_txq_ctx(const struct ice_tlan_ctx *ctx, ice_txq_ctx_buf_t *buf)
>> +{
>> + CHECK_PACKED_FIELDS_27(ice_tlan_ctx_fields, ICE_TXQ_CTX_SZ);
>> + BUILD_BUG_ON(sizeof(*buf) != ICE_TXQ_CTX_SZ);
>> +
>> + pack_fields(buf, sizeof(*buf), ctx, ice_tlan_ctx_fields,
>> + QUIRK_LITTLE_ENDIAN | QUIRK_LSW32_IS_FIRST);
>> +}
>> +
>
> Same here with the 27 variant.
>
Yea, same problem of course. The generated checks are too large to be
parsed by smatch.
<snip removed code>
>
> Some nice cleanup!
>
Yea. I got motivated with this because I was looking at introducing the
inverse 'unpacking' variants for supporting VF live migration, and
realized how ugly the existing code was and how much worse adding yet
more code to unpack would be.
>
> /Daniel
Thanks for the careful review!
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH net-next v2 6/9] ice: use <linux/packing.h> for Tx and Rx queue context data
2024-10-29 22:09 ` Jacob Keller
@ 2024-10-29 23:32 ` Jacob Keller
2024-10-30 11:19 ` Dan Carpenter
0 siblings, 1 reply; 23+ messages in thread
From: Jacob Keller @ 2024-10-29 23:32 UTC (permalink / raw)
To: Daniel Machon, Dan Carpenter
Cc: Vladimir Oltean, Andrew Morton, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Tony Nguyen, Przemek Kitszel, Masahiro Yamada,
linux-kernel, netdev
On 10/29/2024 3:09 PM, Jacob Keller wrote:
>
>
> On 10/29/2024 7:50 AM, Daniel Machon wrote:
>> Hi Jacob,
>
>>> +/**
>>> + * ice_pack_rxq_ctx - Pack Rx queue context into a HW buffer
>>> + * @ctx: the Rx queue context to pack
>>> + * @buf: the HW buffer to pack into
>>> + *
>>> + * Pack the Rx queue context from the CPU-friendly unpacked buffer into its
>>> + * bit-packed HW layout.
>>> + */
>>> +static void ice_pack_rxq_ctx(const struct ice_rlan_ctx *ctx,
>>> + ice_rxq_ctx_buf_t *buf)
>>> +{
>>> + CHECK_PACKED_FIELDS_20(ice_rlan_ctx_fields, ICE_RXQ_CTX_SZ);
>>> + BUILD_BUG_ON(sizeof(*buf) != ICE_RXQ_CTX_SZ);
>>> +
>>> + pack_fields(buf, sizeof(*buf), ctx, ice_rlan_ctx_fields,
>>> + QUIRK_LITTLE_ENDIAN | QUIRK_LSW32_IS_FIRST);
>>> +}
>>> +
>>
>> FWIW, I noticed that smatch bails out checking all the CHECK_PACKED_FIELDS_*
>> variants >= 20, with the warning:
>>
>> ice_common.c:1486 ice_pack_txq_ctx() parse error: OOM: 3000148Kb sm_state_count = 413556
>> ice_common.c:1486 ice_pack_txq_ctx() warn: Function too hairy. No more merges.
>> ice_common.c:1486 ice_pack_txq_ctx() parse error: Function too hairy. Giving up. 43 second
>>
>
> We might need to wrap these checks to become no-ops when running under
> such a checker. It looks like the parser doesn't like the size of the
> macros?
>
>> Maybe this can just be ignored .. not sure :-)
>>
>
> I would prefer if we found a way to at least silence it, rather than
> straight up ignore it.
>
> I am not that familiar with smatch. Let me see if there's an obvious way
> we can handle this.
>
From the look of this, smatch is running out of memory trying to keep
track of a number of possible states. Likely the depth of the macro is
too high once we get beyond 20.
I think the simplest solution would be to disable these macros under
__CHECKER__, and possibly even write our own checker in smatch which can
cover this case.
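Something along these lines, perhaps (untested sketch; sparse and smatch
both define __CHECKER__):

	#ifdef __CHECKER__
	/* These are compile-time asserts only; skipping them under a
	 * static checker loses nothing, since a normal build still
	 * enforces them.
	 */
	#define CHECK_PACKED_FIELDS_20(fields, pbuflen) do {} while (0)
	#define CHECK_PACKED_FIELDS_27(fields, pbuflen) do {} while (0)
	#endif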
@Dan Carpenter:
Any chance you could provide guidance here?
Thanks,
Jake
^ permalink raw reply [flat|nested] 23+ messages in thread* Re: [PATCH net-next v2 6/9] ice: use <linux/packing.h> for Tx and Rx queue context data
2024-10-29 23:32 ` Jacob Keller
@ 2024-10-30 11:19 ` Dan Carpenter
2024-10-30 20:34 ` Jacob Keller
0 siblings, 1 reply; 23+ messages in thread
From: Dan Carpenter @ 2024-10-30 11:19 UTC (permalink / raw)
To: Jacob Keller
Cc: Daniel Machon, Vladimir Oltean, Andrew Morton, Eric Dumazet,
Jakub Kicinski, Paolo Abeni, Tony Nguyen, Przemek Kitszel,
Masahiro Yamada, linux-kernel, netdev
Always just ignore the tool if it's not useful.
CHECK_PACKED_FIELDS_ macros are just build time asserts, right? I can easily
just hard code Smatch to ignore CHECK_PACKED_FIELDS_* macros. I'm just going to
go ahead and do that in the ugliest way possible. If we have a lot of these then
I'll do it properly.
regards,
dan carpenter
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH net-next v2 6/9] ice: use <linux/packing.h> for Tx and Rx queue context data
2024-10-30 11:19 ` Dan Carpenter
@ 2024-10-30 20:34 ` Jacob Keller
2024-10-31 7:46 ` Dan Carpenter
0 siblings, 1 reply; 23+ messages in thread
From: Jacob Keller @ 2024-10-30 20:34 UTC (permalink / raw)
To: Dan Carpenter
Cc: Daniel Machon, Vladimir Oltean, Andrew Morton, Eric Dumazet,
Jakub Kicinski, Paolo Abeni, Tony Nguyen, Przemek Kitszel,
Masahiro Yamada, linux-kernel, netdev
On 10/30/2024 4:19 AM, Dan Carpenter wrote:
> Always just ignore the tool if it's not useful.
>
> CHECK_PACKED_FIELDS_ macros are just build time asserts, right? I can easily
> just hard code Smatch to ignore CHECK_PACKED_FIELDS_* macros. I'm just going to
> go ahead and do that in the ugliest way possible. If we have a lot of these then
> I'll do it properly.
>
We have 2 for ice, and likely a handful for some of the drivers Vladimir
is working on. More may happen in the future, but the number is unlikely
to grow quickly.
I was thinking of making them empty definitions if __CHECKER__, but
ignoring them in smatch would be easier on my end :D
> regards,
> dan carpenter
>
Looking at how smatch works, it actually seems like we could implement
the desired sanity checks in smatch, though I wasn't quite able to
figure out how to hook into struct/array assignments to do that yet.
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH net-next v2 6/9] ice: use <linux/packing.h> for Tx and Rx queue context data
2024-10-30 20:34 ` Jacob Keller
@ 2024-10-31 7:46 ` Dan Carpenter
2024-11-07 17:32 ` Jacob Keller
0 siblings, 1 reply; 23+ messages in thread
From: Dan Carpenter @ 2024-10-31 7:46 UTC (permalink / raw)
To: Jacob Keller
Cc: Daniel Machon, Vladimir Oltean, Andrew Morton, Eric Dumazet,
Jakub Kicinski, Paolo Abeni, Tony Nguyen, Przemek Kitszel,
Masahiro Yamada, linux-kernel, netdev
On Wed, Oct 30, 2024 at 01:34:47PM -0700, Jacob Keller wrote:
>
>
> On 10/30/2024 4:19 AM, Dan Carpenter wrote:
> > Always just ignore the tool if it's not useful.
> >
> > CHECK_PACKED_FIELDS_ macros are just build time asserts, right? I can easily
> > just hard code Smatch to ignore CHECK_PACKED_FIELDS_* macros. I'm just going to
> > go ahead and do that in the ugliest way possible. If we have a lot of these then
> > I'll do it properly.
> >
>
> We have 2 for ice, and likely a handful for some of the drivers Vladimir
> is working on. More may happen in the future, but the number is unlikely
> to grow quickly.
>
> I was thinking of making them empty definitions if __CHECKER__, but
> ignoring them in smatch would be easier on my end :D
>
Adding them to __CHECKER__ works too.
> > regards,
> > dan carpenter
> >
> Looking at how smatch works, it actually seems like we could implement
> the desired sanity checks in smatch, though I wasn't quite able to
> figure out how to hook into struct/array assignments to do that yet.
I'd do it the way you have. It's better to be close to the code. It's way
harder in Smatch and it's not like you need flow analysis.
regards,
dan carpenter
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH net-next v2 6/9] ice: use <linux/packing.h> for Tx and Rx queue context data
2024-10-31 7:46 ` Dan Carpenter
@ 2024-11-07 17:32 ` Jacob Keller
0 siblings, 0 replies; 23+ messages in thread
From: Jacob Keller @ 2024-11-07 17:32 UTC (permalink / raw)
To: Dan Carpenter
Cc: Daniel Machon, Vladimir Oltean, Andrew Morton, Eric Dumazet,
Jakub Kicinski, Paolo Abeni, Tony Nguyen, Przemek Kitszel,
Masahiro Yamada, linux-kernel, netdev
On 10/31/2024 12:46 AM, Dan Carpenter wrote:
> On Wed, Oct 30, 2024 at 01:34:47PM -0700, Jacob Keller wrote:
>>
>>
>> On 10/30/2024 4:19 AM, Dan Carpenter wrote:
>>> Always just ignore the tool if it's not useful.
>>>
>>> CHECK_PACKED_FIELDS_ macros are just build time asserts, right? I can easily
>>> just hard code Smatch to ignore CHECK_PACKED_FIELDS_* macros. I'm just going to
>>> go ahead and do that in the ugliest way possible. If we have a lot of these then
>>> I'll do it properly.
>>>
>>
>> We have 2 for ice, and likely a handful for some of the drivers Vladimir
>> is working on. More may happen in the future, but the number is unlikely
>> to grow quickly.
>>
>> I was thinking of making them empty definitions if __CHECKER__, but
>> ignoring them in smatch would be easier on my end :D
>>
>
> Adding them to __CHECKER__ works too.
Jakub suggested implementing the checks in modpost, which means the
CHECK_PACKED_FIELDS macros won't be merged.
I saw you did end up updating smatch to handle this, so wanted to let
you know it looks like it won't be necessary now.
Thanks,
Jake
^ permalink raw reply [flat|nested] 23+ messages in thread
* [PATCH net-next v2 7/9] ice: reduce size of queue context fields
2024-10-26 0:04 [PATCH net-next v2 0/9] lib: packing: introduce and use (un)pack_fields Jacob Keller
` (5 preceding siblings ...)
2024-10-26 0:04 ` [PATCH net-next v2 6/9] ice: use <linux/packing.h> for Tx and Rx queue context data Jacob Keller
@ 2024-10-26 0:04 ` Jacob Keller
2024-10-26 0:05 ` [PATCH net-next v2 8/9] ice: move prefetch enable to ice_setup_rx_ctx Jacob Keller
` (3 subsequent siblings)
10 siblings, 0 replies; 23+ messages in thread
From: Jacob Keller @ 2024-10-26 0:04 UTC (permalink / raw)
To: Vladimir Oltean, Andrew Morton, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Tony Nguyen, Przemek Kitszel, Masahiro Yamada
Cc: linux-kernel, netdev, Jacob Keller
The ice_rlan_ctx and ice_tlan_ctx structures have some fields which are
intentionally sized larger than necessary relative to the packed sizes the
data must fit into. This was done because the original ice_set_ctx()
function and its helpers did not correctly handle packing when the packed
bits straddled a byte boundary. This is no longer the case with the use of
the <linux/packing.h> implementation.
Save some bytes in these structures by sizing the variables to the number
of bytes the actual bitpacked fields fit into.
A couple of gaps remain in each structure, a result of keeping the fields
in the order they appear in the packed bit layout, where natural alignment
then forces some padding. We could reorder the fields to fix this, saving
~8 bytes from each structure. However, these structures are not used
heavily, and the resulting savings are minimal:
$ bloat-o-meter ice-before-reorder.ko ice-after-reorder.ko
add/remove: 0/0 grow/shrink: 1/1 up/down: 26/-70 (-44)
Function old new delta
ice_vsi_cfg_txq 1873 1899 +26
ice_setup_rx_ctx.constprop 1529 1459 -70
Total: Before=1459555, After=1459511, chg -0.00%
Thus, the fields are left in the same order as the packed bit layout,
despite the gaps this causes.
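For illustration, the kind of gap in question (hypothetical layout, not
the real context structures; offsets assume a typical 64-bit build):

	struct gappy {
		u8 a;	/* offset 0, then 7 bytes of padding */
		u64 b;	/* 8-byte alignment forces the gap */
		u8 c;	/* offset 16, then 7 bytes of tail padding */
	};		/* sizeof == 24 */

	struct reordered {
		u64 b;	/* offset 0 */
		u8 a;	/* offset 8 */
		u8 c;	/* offset 9, tail padding to 16 */
	};		/* sizeof == 16 */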
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
---
drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h | 32 ++++++++------------------
1 file changed, 10 insertions(+), 22 deletions(-)
diff --git a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h
index 31d4a445d640..1479b45738af 100644
--- a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h
+++ b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h
@@ -375,23 +375,17 @@ enum ice_rx_flex_desc_status_error_1_bits {
#define ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS 5
#define GLTCLAN_CQ_CNTX(i, CQ) (GLTCLAN_CQ_CNTX0(CQ) + ((i) * 0x0800))
-/* RLAN Rx queue context data
- *
- * The sizes of the variables may be larger than needed due to crossing byte
- * boundaries. If we do not have the width of the variable set to the correct
- * size then we could end up shifting bits off the top of the variable when the
- * variable is at the top of a byte and crosses over into the next byte.
- */
+/* RLAN Rx queue context data */
struct ice_rlan_ctx {
u16 head;
- u16 cpuid; /* bigger than needed, see above for reason */
+ u8 cpuid;
#define ICE_RLAN_BASE_S 7
u64 base;
u16 qlen;
#define ICE_RLAN_CTX_DBUF_S 7
- u16 dbuf; /* bigger than needed, see above for reason */
+ u8 dbuf;
#define ICE_RLAN_CTX_HBUF_S 6
- u16 hbuf; /* bigger than needed, see above for reason */
+ u8 hbuf;
u8 dtype;
u8 dsize;
u8 crcstrip;
@@ -399,12 +393,12 @@ struct ice_rlan_ctx {
u8 hsplit_0;
u8 hsplit_1;
u8 showiv;
- u32 rxmax; /* bigger than needed, see above for reason */
+ u16 rxmax;
u8 tphrdesc_ena;
u8 tphwdesc_ena;
u8 tphdata_ena;
u8 tphhead_ena;
- u16 lrxqthresh; /* bigger than needed, see above for reason */
+ u8 lrxqthresh;
u8 prefena; /* NOTE: normally must be set to 1 at init */
};
@@ -535,18 +529,12 @@ enum ice_tx_ctx_desc_eipt_offload {
#define ICE_LAN_TXQ_MAX_QGRPS 127
#define ICE_LAN_TXQ_MAX_QDIS 1023
-/* Tx queue context data
- *
- * The sizes of the variables may be larger than needed due to crossing byte
- * boundaries. If we do not have the width of the variable set to the correct
- * size then we could end up shifting bits off the top of the variable when the
- * variable is at the top of a byte and crosses over into the next byte.
- */
+/* Tx queue context data */
struct ice_tlan_ctx {
#define ICE_TLAN_CTX_BASE_S 7
u64 base; /* base is defined in 128-byte units */
u8 port_num;
- u16 cgd_num; /* bigger than needed, see above for reason */
+ u8 cgd_num;
u8 pf_num;
u16 vmvf_num;
u8 vmvf_type;
@@ -557,7 +545,7 @@ struct ice_tlan_ctx {
u8 tsyn_ena;
u8 internal_usage_flag;
u8 alt_vlan;
- u16 cpuid; /* bigger than needed, see above for reason */
+ u8 cpuid;
u8 wb_mode;
u8 tphrd_desc;
u8 tphrd;
@@ -566,7 +554,7 @@ struct ice_tlan_ctx {
u16 qnum_in_func;
u8 itr_notification_mode;
u8 adjust_prof_id;
- u32 qlen; /* bigger than needed, see above for reason */
+ u16 qlen;
u8 quanta_prof_idx;
u8 tso_ena;
u16 tso_qnum;
--
2.47.0.265.g4ca455297942
^ permalink raw reply related [flat|nested] 23+ messages in thread
* [PATCH net-next v2 8/9] ice: move prefetch enable to ice_setup_rx_ctx
2024-10-26 0:04 [PATCH net-next v2 0/9] lib: packing: introduce and use (un)pack_fields Jacob Keller
` (6 preceding siblings ...)
2024-10-26 0:04 ` [PATCH net-next v2 7/9] ice: reduce size of queue context fields Jacob Keller
@ 2024-10-26 0:05 ` Jacob Keller
2024-10-26 0:05 ` [PATCH net-next v2 9/9] ice: cleanup Rx queue context programming functions Jacob Keller
` (2 subsequent siblings)
10 siblings, 0 replies; 23+ messages in thread
From: Jacob Keller @ 2024-10-26 0:05 UTC (permalink / raw)
To: Vladimir Oltean, Andrew Morton, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Tony Nguyen, Przemek Kitszel, Masahiro Yamada
Cc: linux-kernel, netdev, Jacob Keller
The ice_write_rxq_ctx() function is responsible for programming the Rx
Queue context into hardware. It receives the configuration in unpacked form
via the ice_rlan_ctx structure.
This function unconditionally modifies the context to set the prefetch
enable bit. This was done by commit c31a5c25bb19 ("ice: Always set prefena
when configuring an Rx queue"). Setting this bit makes sense, since
prefetching descriptors is almost always the preferred behavior.
However, the ice_write_rxq_ctx() function is not the place that actually
defines the queue context. We initialize the Rx Queue context in
ice_setup_rx_ctx(). It is surprising to have the Rx queue context changed
by a function whose responsibility is to program the given context to
hardware.
Following the principle of least surprise, move the setting of the prefetch
enable bit out of ice_write_rxq_ctx() and into ice_setup_rx_ctx().
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
---
drivers/net/ethernet/intel/ice/ice_base.c | 3 +++
drivers/net/ethernet/intel/ice/ice_common.c | 9 +++------
2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
index 0a325dec804e..f1fbba19e4e4 100644
--- a/drivers/net/ethernet/intel/ice/ice_base.c
+++ b/drivers/net/ethernet/intel/ice/ice_base.c
@@ -453,6 +453,9 @@ static int ice_setup_rx_ctx(struct ice_rx_ring *ring)
/* Rx queue threshold in units of 64 */
rlan_ctx.lrxqthresh = 1;
+ /* Enable descriptor prefetch */
+ rlan_ctx.prefena = 1;
+
/* PF acts as uplink for switchdev; set flex descriptor with src_vsi
* metadata and flags to allow redirecting to PR netdev
*/
diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
index 905f5c745a7b..3cc56b85a480 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.c
+++ b/drivers/net/ethernet/intel/ice/ice_common.c
@@ -1433,14 +1433,13 @@ static void ice_pack_rxq_ctx(const struct ice_rlan_ctx *ctx,
}
/**
- * ice_write_rxq_ctx
+ * ice_write_rxq_ctx - Write Rx Queue context to hardware
* @hw: pointer to the hardware structure
* @rlan_ctx: pointer to the rxq context
* @rxq_index: the index of the Rx queue
*
- * Converts rxq context from sparse to dense structure and then writes
- * it to HW register space and enables the hardware to prefetch descriptors
- * instead of only fetching them on demand
+ * Pack the sparse Rx Queue context into dense hardware format and write it
+ * into the HW register space.
*/
int ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
u32 rxq_index)
@@ -1450,8 +1449,6 @@ int ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
if (!rlan_ctx)
return -EINVAL;
- rlan_ctx->prefena = 1;
-
ice_pack_rxq_ctx(rlan_ctx, &buf);
return ice_copy_rxq_ctx_to_hw(hw, &buf, rxq_index);
--
2.47.0.265.g4ca455297942
^ permalink raw reply related [flat|nested] 23+ messages in thread
* [PATCH net-next v2 9/9] ice: cleanup Rx queue context programming functions
2024-10-26 0:04 [PATCH net-next v2 0/9] lib: packing: introduce and use (un)pack_fields Jacob Keller
` (7 preceding siblings ...)
2024-10-26 0:05 ` [PATCH net-next v2 8/9] ice: move prefetch enable to ice_setup_rx_ctx Jacob Keller
@ 2024-10-26 0:05 ` Jacob Keller
2024-10-29 18:28 ` [PATCH net-next v2 0/9] lib: packing: introduce and use (un)pack_fields Vladimir Oltean
2024-10-31 9:30 ` Daniel Machon
10 siblings, 0 replies; 23+ messages in thread
From: Jacob Keller @ 2024-10-26 0:05 UTC (permalink / raw)
To: Vladimir Oltean, Andrew Morton, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Tony Nguyen, Przemek Kitszel, Masahiro Yamada
Cc: linux-kernel, netdev, Jacob Keller
The ice_copy_rxq_ctx_to_hw() and ice_write_rxq_ctx() functions perform some
defensive checks which are typically frowned upon by kernel style
guidelines.
In particular, NULL checks on buffers which point to the stack are
discouraged, especially when the functions are static and only called once.
Checks of this sort only serve to hide potential programming errors, as we
will not produce the normal crash dump on a NULL access.
In addition, ice_copy_rxq_ctx_to_hw() cannot fail in any other way, so it
can be made void.
Future support for VF Live Migration will need to introduce an inverse
function for reading Rx queue context from HW registers to unpack it, as
well as functions to pack and unpack Tx queue context from HW.
Rather than copying these style issues into the new functions, let's first
clean up the existing code.
For the ice_copy_rxq_ctx_to_hw() function:
* Move the Rx queue index check out of this function.
* Convert the function to a void return.
* Use a simple int variable instead of a u8 for the for loop index, and
initialize it inside the for loop.
* Update the function description to better align with kernel doc style.
For the ice_write_rxq_ctx() function:
* Move the Rx queue index check into this function.
* Update the function description with a Return: section to align with
kernel doc style.
These changes bring the existing write functions in line with current
kernel style, and match the style of the new functions that will be added
when live migration is implemented in a future series.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
---
drivers/net/ethernet/intel/ice/ice_common.c | 28 +++++++++++-----------------
1 file changed, 11 insertions(+), 17 deletions(-)
diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
index 3cc56b85a480..54ebd2969203 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.c
+++ b/drivers/net/ethernet/intel/ice/ice_common.c
@@ -1358,32 +1358,23 @@ int ice_reset(struct ice_hw *hw, enum ice_reset_req req)
}
/**
- * ice_copy_rxq_ctx_to_hw
+ * ice_copy_rxq_ctx_to_hw - Copy packed Rx queue context to HW registers
* @hw: pointer to the hardware structure
* @rxq_ctx: pointer to the packed Rx queue context
* @rxq_index: the index of the Rx queue
- *
- * Copies rxq context from dense structure to HW register space
*/
-static int ice_copy_rxq_ctx_to_hw(struct ice_hw *hw,
- const ice_rxq_ctx_buf_t *rxq_ctx,
- u32 rxq_index)
+static void ice_copy_rxq_ctx_to_hw(struct ice_hw *hw,
+ const ice_rxq_ctx_buf_t *rxq_ctx,
+ u32 rxq_index)
{
- u8 i;
-
- if (rxq_index > QRX_CTRL_MAX_INDEX)
- return -EINVAL;
-
/* Copy each dword separately to HW */
- for (i = 0; i < ICE_RXQ_CTX_SIZE_DWORDS; i++) {
+ for (int i = 0; i < ICE_RXQ_CTX_SIZE_DWORDS; i++) {
u32 ctx = ((const u32 *)rxq_ctx)[i];
wr32(hw, QRX_CONTEXT(i, rxq_index), ctx);
ice_debug(hw, ICE_DBG_QCTX, "qrxdata[%d]: %08X\n", i, ctx);
}
-
- return 0;
}
#define ICE_CTX_STORE(struct_name, struct_field, width, lsb) \
@@ -1435,23 +1426,26 @@ static void ice_pack_rxq_ctx(const struct ice_rlan_ctx *ctx,
/**
* ice_write_rxq_ctx - Write Rx Queue context to hardware
* @hw: pointer to the hardware structure
- * @rlan_ctx: pointer to the rxq context
+ * @rlan_ctx: pointer to the unpacked Rx queue context
* @rxq_index: the index of the Rx queue
*
* Pack the sparse Rx Queue context into dense hardware format and write it
* into the HW register space.
+ *
+ * Return: 0 on success, or -EINVAL if the Rx queue index is invalid.
*/
int ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
u32 rxq_index)
{
ice_rxq_ctx_buf_t buf = {};
- if (!rlan_ctx)
+ if (rxq_index > QRX_CTRL_MAX_INDEX)
return -EINVAL;
ice_pack_rxq_ctx(rlan_ctx, &buf);
+ ice_copy_rxq_ctx_to_hw(hw, &buf, rxq_index);
- return ice_copy_rxq_ctx_to_hw(hw, &buf, rxq_index);
+ return 0;
}
/* LAN Tx Queue Context */
--
2.47.0.265.g4ca455297942
* Re: [PATCH net-next v2 0/9] lib: packing: introduce and use (un)pack_fields
2024-10-26 0:04 [PATCH net-next v2 0/9] lib: packing: introduce and use (un)pack_fields Jacob Keller
` (8 preceding siblings ...)
2024-10-26 0:05 ` [PATCH net-next v2 9/9] ice: cleanup Rx queue context programming functions Jacob Keller
@ 2024-10-29 18:28 ` Vladimir Oltean
2024-10-31 9:30 ` Daniel Machon
10 siblings, 0 replies; 23+ messages in thread
From: Vladimir Oltean @ 2024-10-29 18:28 UTC (permalink / raw)
To: Jacob Keller
Cc: Andrew Morton, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
Tony Nguyen, Przemek Kitszel, Masahiro Yamada, linux-kernel,
netdev, Vladimir Oltean
On Fri, Oct 25, 2024 at 05:04:52PM -0700, Jacob Keller wrote:
> This series improves the packing library with a new API for packing or
> unpacking a large number of fields at once with minimal code footprint. The
> API is then used to replace bespoke packing logic in the ice driver,
> preparing it to handle unpacking in the future. Finally, the ice driver has
> a few other cleanups related to the packing logic.
>
> The pack_fields and unpack_fields functions have the following improvements
> over the existing pack() and unpack() API:
>
> 1. Packing or unpacking a large number of fields takes significantly less
> code. This reduces the .text size, in exchange for a much smaller
> increase in the .data size.
>
> 2. The unpacked data can be stored in sizes smaller than u64 variables.
> This reduces the storage requirement both for runtime data structures,
> and for the rodata defining the fields. This scales with the number of
> fields used.
>
> 3. Most of the error checking is done at compile time via the
> CHECK_PACKED_FIELD_* macros, rather than at runtime. This avoids wasted
> computation time, *and* catches errors in the field definitions
> immediately instead of only after the offending code executes.
>
> The actual packing and unpacking code still operates on u64-sized
> variables. However, these are converted to the appropriate field sizes
> when storing or reading the data from the buffer.
>
> One complexity is that the CHECK_PACKED_FIELD_* macros need to be defined
> once per size of the packed_fields array. This is because we don't have a
> good way to handle the ordering checks otherwise. The C pre-processor is
> unable to generate and run variable length loops at compile time.
>
> This is a significant amount of macro code, ~22,000 lines. To ensure it
> is correct and to avoid storing it directly in the kernel history, the
> file is generated as <generated/packing-checks.h> by a small C program,
> gen_packing_checks. To support this, the top-level Kbuild process is
> updated to compile gen_packing_checks and run it to generate the
> packing-checks.h file.
>
> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
> ---
> Changes in v2:
> - Add my missing sign-off to the first patch
> - Update the descriptions for a few patches
> - Only generate CHECK_PACKED_FIELDS_N when another module selects it
> - Add a new patch introducing wrapper structures for the packed Tx and Rx
> queue context, suggested by Vladimir.
> - Drop the now unnecessary macros in ice, thanks to the new types
> - Link to v1: https://lore.kernel.org/r/20241011-packing-pack-fields-and-ice-implementation-v1-0-d9b1f7500740@intel.com
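As a quick illustration of the resulting usage pattern (my_ctx,
my_ctx_fields and my_ctx_pack below are hypothetical names; this is a
sketch of the API shape described above, not code from the series):

        #include <linux/packing.h>

        struct my_ctx {
                u16 base;
                u8 qlen;
        };

        /* Field definitions live in rodata; pack_fields() runs the
         * generated compile-time checks (validity, ordering, overlap)
         * against them.
         */
        static const struct packed_field_u8 my_ctx_fields[] = {
                /* PACKED_FIELD(startbit, endbit, struct, member) */
                PACKED_FIELD(15, 0, struct my_ctx, base),
                PACKED_FIELD(23, 16, struct my_ctx, qlen),
        };

        static void my_ctx_pack(const struct my_ctx *ctx, u8 (*buf)[8])
        {
                pack_fields(buf, sizeof(*buf), ctx, my_ctx_fields,
                            QUIRK_LITTLE_ENDIAN);
        }

And roughly the shape of one size-specific generated check (illustrative
only; the real <generated/packing-checks.h> differs in detail):

        #define CHECK_PACKED_FIELDS_2(fields) do { \
                CHECK_PACKED_FIELD((fields)[0]); \
                CHECK_PACKED_FIELD((fields)[1]); \
                /* ascending, non-overlapping field order */ \
                BUILD_BUG_ON((fields)[1].endbit <= (fields)[0].startbit); \
        } while (0)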
For the set:
Reviewed-by: Vladimir Oltean <olteanv@gmail.com>
Thanks for working on this!
* Re: [PATCH net-next v2 0/9] lib: packing: introduce and use (un)pack_fields
2024-10-26 0:04 [PATCH net-next v2 0/9] lib: packing: introduce and use (un)pack_fields Jacob Keller
` (9 preceding siblings ...)
2024-10-29 18:28 ` [PATCH net-next v2 0/9] lib: packing: introduce and use (un)pack_fields Vladimir Oltean
@ 2024-10-31 9:30 ` Daniel Machon
10 siblings, 0 replies; 23+ messages in thread
From: Daniel Machon @ 2024-10-31 9:30 UTC (permalink / raw)
To: Jacob Keller
Cc: Vladimir Oltean, Andrew Morton, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Tony Nguyen, Przemek Kitszel, Masahiro Yamada,
linux-kernel, netdev, Vladimir Oltean
> This series improves the packing library with a new API for packing or
> unpacking a large number of fields at once with minimal code footprint. The
> API is then used to replace bespoke packing logic in the ice driver,
> preparing it to handle unpacking in the future. Finally, the ice driver has
> a few other cleanups related to the packing logic.
>
> The pack_fields and unpack_fields functions have the following improvements
> over the existing pack() and unpack() API:
>
> 1. Packing or unpacking a large number of fields takes significantly less
> code. This reduces the .text size, in exchange for a much smaller
> increase in the .data size.
>
> 2. The unpacked data can be stored in sizes smaller than u64 variables.
> This reduces the storage requirement both for runtime data structures,
> and for the rodata defining the fields. This scales with the number of
> fields used.
>
> 3. Most of the error checking is done at compile time via the
> CHECK_PACKED_FIELD_* macros, rather than at runtime. This avoids wasted
> computation time, *and* catches errors in the field definitions
> immediately instead of only after the offending code executes.
>
> The actual packing and unpacking code still operates on u64-sized
> variables. However, these are converted to the appropriate field sizes
> when storing or reading the data from the buffer.
>
> One complexity is that the CHECK_PACKED_FIELD_* macros need to be defined
> once per size of the packed_fields array. This is because we don't have a
> good way to handle the ordering checks otherwise. The C pre-processor is
> unable to generate and run variable length loops at compile time.
>
> This is a significant amount of macro code, ~22,000 lines. To ensure it
> is correct and to avoid storing it directly in the kernel history, the
> file is generated as <generated/packing-checks.h> by a small C program,
> gen_packing_checks. To support this, the top-level Kbuild process is
> updated to compile gen_packing_checks and run it to generate the
> packing-checks.h file.
>
Hi Jacob,
As for the rest of the patches:
Reviewed-by: Daniel Machon <daniel.machon@microchip.com>
I can confirm that smatch does not complain anymore, after Dan's recent
commit that skips the macros.
/Daniel