* [PATCH v3 00/16] drm/msm: Add PERFCNTR_CONFIG ioctl
@ 2026-05-04 19:06 Rob Clark
0 siblings, 0 replies; 3+ messages in thread
From: Rob Clark @ 2026-05-04 19:06 UTC (permalink / raw)
To: dri-devel
Cc: linux-arm-msm, freedreno, Akhil P Oommen, Rob Clark,
Abhinav Kumar, Bill Wendling, David Airlie, Dmitry Baryshkov,
Jessica Zhang, Justin Stitt, Konrad Dybcio, open list,
open list:CLANG/LLVM BUILD SUPPORT:Keyword:\b(?i:clang|llvm)\b,
Maarten Lankhorst, Marijn Suijten, Maxime Ripard,
Nathan Chancellor, Nick Desaulniers, Sean Paul, Simona Vetter,
Thomas Zimmermann
Add a new PERFCNTR_CONFIG ioctl, serving two functions:
1. Global counter collection (restricted to perfmon_capable()) using the
MSM_PERFCNTR_STREAM flag. Global counter sampling is global across
contexts. Only a single global counter stream is allowed at a time.
2. Reserve counters for local counter collection. Local counter
collection is local to a cmdstream (GEM_SUBMIT), and as such is
allowed in all processes without additional privileges.
The kernel enforces that counters assigned for global counter collection
do not conflict with counters reserved for local counter collection, and
vice versa. Since local counter collection is scoped to a single
cmdstream, multiple UMD processes can overlap in their reserved
counters, but they cannot conflict with global counter usage.
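The enforcement described above presumably reduces to an intersection test over per-group counter bitmaps. A minimal sketch, assuming a bitmask representation (the function name and model are illustrative, not the driver's actual code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical model: one bit per physical counter in a group.
 * A new global assignment must not touch counters already reserved
 * locally, and a new local reservation must not touch counters in
 * global use.  Local reservations from different processes may
 * overlap freely, so they are never checked against each other.
 */
static bool counters_conflict(uint64_t new_mask, uint64_t other_mask)
{
	return (new_mask & other_mask) != 0;
}
```

In this model, two UMDs reserving overlapping local masks only ever test against the global stream's mask, never against each other's.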
In the case of local counter collection, the UMD is still responsible
for programming the corresponding SELect registers, and sampling the
counter values, from its cmdstream. But by performing the reservation
step, the UMD protects itself from the kernel trying to use the same
SEL/counter regs for global counter collection.
For global counter collection, the kernel programs SEL regs, and sets up
a timer for counter sampling. Userspace reads out the sampled values
from the returned perfcntr stream fd. Releasing the global perfcntr
stream is simply a matter of close()ing the fd.
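In rough userspace terms, the global-stream lifecycle looks like the sketch below. Everything here is illustrative: the record layout and field names are assumptions, and the real definitions live in the series' include/uapi/drm/msm_drm.h changes.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sample record; the actual stream format is defined by
 * the uapi header in this series, not here. */
struct sample_rec {
	uint64_t timestamp;
	uint64_t counter_value;
};

/* Whole records contained in a read() that returned 'len' bytes. */
static size_t sample_count(size_t len)
{
	return len / sizeof(struct sample_rec);
}

/*
 * Sketch of the consumer loop (not compiled here):
 *
 *   int sfd = ...;   // stream fd returned by PERFCNTR_CONFIG
 *   char buf[4096];
 *   ssize_t n;
 *   while ((n = read(sfd, buf, sizeof(buf))) > 0)
 *           process(buf, sample_count(n));
 *   close(sfd);      // releases the global stream
 */
```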
The final two patches wire up the needed support for global counter
stream collection while IFPC is active, and drop disabling of IFPC.
The mesa side of this is at:
https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/41158
igt test at:
https://gitlab.freedesktop.org/robclark/igt-gpu-tools/-/commits/perfcntrs
Changes in v3:
- Fix loop counter issue spotted by Claude review
- Add MSM_PERFCNTR_UPDATE flag to ask kernel to return the actual # of
available counters in case of -E2BIG
- Proper barriers for modifying pwrup_reglist
- Link to v2: https://lore.kernel.org/all/20260424151140.104093-1-robin.clark@oss.qualcomm.com
Changes in v2:
- Rework makefile magic based on Dmitry's suggestion, and add a2xx/a5xx
perfcntr tables (although only a6xx+ is supported at this point)
- Fix compile error for compilers that are picky about a struct that
only contains a flex array
- Drop a6xx_idle() under gpu->lock in a6xx_perfcntr_configure(), replace
with perfcntr_fence that sel_worker can check
- Add a7xx+ pwrup_reglist support for restoring SELect regs on exit from
IFPC. (a6xx doesn't support IFPC, and the pwrup_reglist works a bit
differently)
- Stop disabling IFPC when global counter stream is active.
- Link to v1: https://lore.kernel.org/all/20260420222621.417276-1-robin.clark@oss.qualcomm.com/
Rob Clark (16):
drm/msm: Remove obsolete perf infrastructure
drm/msm: Allow CAP_PERFMON for setting SYSPROF
drm/msm/adreno: Sync registers from mesa
drm/msm/registers: Sync gen_header.py from mesa
drm/msm/registers: Add perfcntr json
drm/msm: Add a6xx+ perfcntr tables
drm/msm: Add sysprof accessors
drm/msm/a6xx: Add yield & flush helper
drm/msm: Add per-context perfcntr state
drm/msm: Add basic perfcntr infrastructure
drm/msm/a6xx+: Add support to configure perfcntrs
drm/msm/a8xx: Add perfcntr flush sequence
drm/msm: Add PERFCNTR_CONFIG ioctl
drm/msm/a6xx: Increase pwrup_reglist size
drm/msm/a6xx: Append SEL regs to dyn pwrup reglist
drm/msm/a6xx: Allow IFPC with perfcntr stream
drivers/gpu/drm/msm/Makefile | 27 +-
drivers/gpu/drm/msm/adreno/a2xx_gpu.c | 7 -
drivers/gpu/drm/msm/adreno/a3xx_gpu.c | 16 -
drivers/gpu/drm/msm/adreno/a4xx_gpu.c | 3 -
drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 16 +-
drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 10 +-
drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 217 +-
drivers/gpu/drm/msm/adreno/a6xx_gpu.h | 15 +-
drivers/gpu/drm/msm/adreno/a6xx_preempt.c | 2 +-
drivers/gpu/drm/msm/adreno/a8xx_gpu.c | 33 +-
drivers/gpu/drm/msm/adreno/a8xx_preempt.c | 2 +-
drivers/gpu/drm/msm/adreno/adreno_device.c | 8 +-
drivers/gpu/drm/msm/adreno/adreno_gpu.c | 7 +-
drivers/gpu/drm/msm/msm_debugfs.c | 6 -
drivers/gpu/drm/msm/msm_drv.c | 2 +-
drivers/gpu/drm/msm/msm_drv.h | 13 +-
drivers/gpu/drm/msm/msm_gpu.c | 119 +-
drivers/gpu/drm/msm/msm_gpu.h | 104 +-
drivers/gpu/drm/msm/msm_perf.c | 235 --
drivers/gpu/drm/msm/msm_perfcntr.c | 638 +++++
drivers/gpu/drm/msm/msm_perfcntr.h | 155 ++
drivers/gpu/drm/msm/msm_ringbuffer.h | 2 +
drivers/gpu/drm/msm/msm_submitqueue.c | 3 +-
.../msm/registers/adreno/a2xx_perfcntrs.json | 109 +
drivers/gpu/drm/msm/registers/adreno/a3xx.xml | 8 +-
drivers/gpu/drm/msm/registers/adreno/a5xx.xml | 141 +-
.../msm/registers/adreno/a5xx_perfcntrs.json | 128 +
drivers/gpu/drm/msm/registers/adreno/a6xx.xml | 1300 ++++++-----
.../msm/registers/adreno/a6xx_descriptors.xml | 71 +-
.../drm/msm/registers/adreno/a6xx_enums.xml | 3 +
.../msm/registers/adreno/a6xx_perfcntrs.json | 105 +
.../msm/registers/adreno/a7xx_perfcntrs.json | 228 ++
.../msm/registers/adreno/a8xx_descriptors.xml | 96 +-
.../msm/registers/adreno/a8xx_perfcntrs.json | 240 ++
.../msm/registers/adreno/a8xx_perfcntrs.xml | 1929 +++++++++++++++
.../msm/registers/adreno/adreno_common.xml | 42 +
.../drm/msm/registers/adreno/adreno_pm4.xml | 50 +-
drivers/gpu/drm/msm/registers/gen_header.py | 2079 +++++++++--------
include/uapi/drm/msm_drm.h | 48 +
39 files changed, 6005 insertions(+), 2212 deletions(-)
delete mode 100644 drivers/gpu/drm/msm/msm_perf.c
create mode 100644 drivers/gpu/drm/msm/msm_perfcntr.c
create mode 100644 drivers/gpu/drm/msm/msm_perfcntr.h
create mode 100644 drivers/gpu/drm/msm/registers/adreno/a2xx_perfcntrs.json
create mode 100644 drivers/gpu/drm/msm/registers/adreno/a5xx_perfcntrs.json
create mode 100644 drivers/gpu/drm/msm/registers/adreno/a6xx_perfcntrs.json
create mode 100644 drivers/gpu/drm/msm/registers/adreno/a7xx_perfcntrs.json
create mode 100644 drivers/gpu/drm/msm/registers/adreno/a8xx_perfcntrs.json
create mode 100644 drivers/gpu/drm/msm/registers/adreno/a8xx_perfcntrs.xml
--
2.54.0
^ permalink raw reply [flat|nested] 3+ messages in thread
* [PATCH v4 00/16] drm/msm: Add PERFCNTR_CONFIG ioctl
@ 2026-05-06 17:10 Rob Clark
2026-05-06 17:10 ` [PATCH v4 04/16] drm/msm/registers: Sync gen_header.py from mesa Rob Clark
0 siblings, 1 reply; 3+ messages in thread
From: Rob Clark @ 2026-05-06 17:10 UTC (permalink / raw)
To: dri-devel
Cc: linux-arm-msm, freedreno, Akhil P Oommen, Rob Clark,
Abhinav Kumar, Bill Wendling, David Airlie, Dmitry Baryshkov,
Jessica Zhang, Justin Stitt, Konrad Dybcio, open list,
open list:CLANG/LLVM BUILD SUPPORT:Keyword:\b(?i:clang|llvm)\b,
Maarten Lankhorst, Marijn Suijten, Maxime Ripard,
Nathan Chancellor, Nick Desaulniers, Sean Paul, Simona Vetter,
Thomas Zimmermann
The "at least Claude reviews my patches" edition.
Add a new PERFCNTR_CONFIG ioctl, serving two functions:
1. Global counter collection (restricted to perfmon_capable()) using the
MSM_PERFCNTR_STREAM flag. Global counter sampling is global across
contexts. Only a single global counter stream is allowed at a time.
2. Reserve counters for local counter collection. Local counter
collection is local to a cmdstream (GEM_SUBMIT), and as such is
allowed in all processes without additional privileges.
The kernel enforces that counters assigned for global counter collection
do not conflict with counters reserved for local counter collection, and
vice versa. Since local counter collection is scoped to a single
cmdstream, multiple UMD processes can overlap in their reserved
counters, but they cannot conflict with global counter usage.
In the case of local counter collection, the UMD is still responsible
for programming the corresponding SELect registers, and sampling the
counter values, from its cmdstream. But by performing the reservation
step, the UMD protects itself from the kernel trying to use the same
SEL/counter regs for global counter collection.
For global counter collection, the kernel programs SEL regs, and sets up
a timer for counter sampling. Userspace reads out the sampled values
from the returned perfcntr stream fd. Releasing the global perfcntr
stream is simply a matter of close()ing the fd.
The final two patches wire up the needed support for global counter
stream collection while IFPC is active, and drop disabling of IFPC.
The mesa side of this is at:
https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/41158
igt test at:
https://gitlab.freedesktop.org/robclark/igt-gpu-tools/-/commits/perfcntrs
wiki page about the design:
https://gitlab.freedesktop.org/drm/msm/-/wikis/adreno:-perfcounter-UABI
Changes in v4:
- Fix null ptr deref on older gens without perfcntr support [Claude]
- Add upper limit to userspace controlled FIFO size [Claude]
- Fix nr_regs calculation [Claude]
- Link to v3: https://lore.kernel.org/all/20260504190751.61052-1-robin.clark@oss.qualcomm.com/
Changes in v3:
- Fix loop counter issue spotted by Claude review
- Add MSM_PERFCNTR_UPDATE flag to ask kernel to return the actual # of
available counters in case of -E2BIG
- Proper barriers for modifying pwrup_reglist
- Link to v2: https://lore.kernel.org/all/20260424151140.104093-1-robin.clark@oss.qualcomm.com
Changes in v2:
- Rework makefile magic based on Dmitry's suggestion, and add a2xx/a5xx
perfcntr tables (although only a6xx+ is supported at this point)
- Fix compile error for compilers that are picky about a struct that
only contains a flex array
- Drop a6xx_idle() under gpu->lock in a6xx_perfcntr_configure(), replace
with perfcntr_fence that sel_worker can check
- Add a7xx+ pwrup_reglist support for restoring SELect regs on exit from
IFPC. (a6xx doesn't support IFPC, and the pwrup_reglist works a bit
differently)
- Stop disabling IFPC when global counter stream is active.
- Link to v1: https://lore.kernel.org/all/20260420222621.417276-1-robin.clark@oss.qualcomm.com/
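As an aside on the MSM_PERFCNTR_UPDATE flag noted in the changelogs above, the -E2BIG contract can be modeled roughly as follows. This is a sketch of the assumed semantics, not the kernel's actual code; config_counters() and its parameters are hypothetical:

```c
#include <assert.h>
#include <errno.h>

/*
 * Hypothetical model of the MSM_PERFCNTR_UPDATE contract: when a
 * request exceeds what the hardware exposes, the kernel fails with
 * -E2BIG, but with the flag set it writes back the available count
 * so userspace can retry without probing.
 */
static int config_counters(unsigned int *ncounters,
			   unsigned int available,
			   int update_flag)
{
	if (*ncounters > available) {
		if (update_flag)
			*ncounters = available;
		return -E2BIG;
	}
	return 0;
}
```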
Rob Clark (16):
drm/msm: Remove obsolete perf infrastructure
drm/msm: Allow CAP_PERFMON for setting SYSPROF
drm/msm/adreno: Sync registers from mesa
drm/msm/registers: Sync gen_header.py from mesa
drm/msm/registers: Add perfcntr json
drm/msm: Add a6xx+ perfcntr tables
drm/msm: Add sysprof accessors
drm/msm/a6xx: Add yield & flush helper
drm/msm: Add per-context perfcntr state
drm/msm: Add basic perfcntr infrastructure
drm/msm/a6xx+: Add support to configure perfcntrs
drm/msm/a8xx: Add perfcntr flush sequence
drm/msm: Add PERFCNTR_CONFIG ioctl
drm/msm/a6xx: Increase pwrup_reglist size
drm/msm/a6xx: Append SEL regs to dyn pwrup reglist
drm/msm/a6xx: Allow IFPC with perfcntr stream
drivers/gpu/drm/msm/Makefile | 27 +-
drivers/gpu/drm/msm/adreno/a2xx_gpu.c | 7 -
drivers/gpu/drm/msm/adreno/a3xx_gpu.c | 16 -
drivers/gpu/drm/msm/adreno/a4xx_gpu.c | 3 -
drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 16 +-
drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 10 +-
drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 217 +-
drivers/gpu/drm/msm/adreno/a6xx_gpu.h | 15 +-
drivers/gpu/drm/msm/adreno/a6xx_preempt.c | 2 +-
drivers/gpu/drm/msm/adreno/a8xx_gpu.c | 33 +-
drivers/gpu/drm/msm/adreno/a8xx_preempt.c | 2 +-
drivers/gpu/drm/msm/adreno/adreno_device.c | 8 +-
drivers/gpu/drm/msm/adreno/adreno_gpu.c | 7 +-
drivers/gpu/drm/msm/msm_debugfs.c | 6 -
drivers/gpu/drm/msm/msm_drv.c | 2 +-
drivers/gpu/drm/msm/msm_drv.h | 13 +-
drivers/gpu/drm/msm/msm_gpu.c | 119 +-
drivers/gpu/drm/msm/msm_gpu.h | 104 +-
drivers/gpu/drm/msm/msm_perf.c | 235 --
drivers/gpu/drm/msm/msm_perfcntr.c | 650 ++++++
drivers/gpu/drm/msm/msm_perfcntr.h | 155 ++
drivers/gpu/drm/msm/msm_ringbuffer.h | 2 +
drivers/gpu/drm/msm/msm_submitqueue.c | 3 +-
.../msm/registers/adreno/a2xx_perfcntrs.json | 109 +
drivers/gpu/drm/msm/registers/adreno/a3xx.xml | 8 +-
drivers/gpu/drm/msm/registers/adreno/a5xx.xml | 141 +-
.../msm/registers/adreno/a5xx_perfcntrs.json | 128 +
drivers/gpu/drm/msm/registers/adreno/a6xx.xml | 1300 ++++++-----
.../msm/registers/adreno/a6xx_descriptors.xml | 71 +-
.../drm/msm/registers/adreno/a6xx_enums.xml | 3 +
.../msm/registers/adreno/a6xx_perfcntrs.json | 105 +
.../msm/registers/adreno/a7xx_perfcntrs.json | 228 ++
.../msm/registers/adreno/a8xx_descriptors.xml | 96 +-
.../msm/registers/adreno/a8xx_perfcntrs.json | 240 ++
.../msm/registers/adreno/a8xx_perfcntrs.xml | 1929 +++++++++++++++
.../msm/registers/adreno/adreno_common.xml | 42 +
.../drm/msm/registers/adreno/adreno_pm4.xml | 50 +-
drivers/gpu/drm/msm/registers/gen_header.py | 2079 +++++++++--------
include/uapi/drm/msm_drm.h | 48 +
39 files changed, 6017 insertions(+), 2212 deletions(-)
delete mode 100644 drivers/gpu/drm/msm/msm_perf.c
create mode 100644 drivers/gpu/drm/msm/msm_perfcntr.c
create mode 100644 drivers/gpu/drm/msm/msm_perfcntr.h
create mode 100644 drivers/gpu/drm/msm/registers/adreno/a2xx_perfcntrs.json
create mode 100644 drivers/gpu/drm/msm/registers/adreno/a5xx_perfcntrs.json
create mode 100644 drivers/gpu/drm/msm/registers/adreno/a6xx_perfcntrs.json
create mode 100644 drivers/gpu/drm/msm/registers/adreno/a7xx_perfcntrs.json
create mode 100644 drivers/gpu/drm/msm/registers/adreno/a8xx_perfcntrs.json
create mode 100644 drivers/gpu/drm/msm/registers/adreno/a8xx_perfcntrs.xml
--
2.54.0
* [PATCH v4 04/16] drm/msm/registers: Sync gen_header.py from mesa
2026-05-06 17:10 [PATCH v4 00/16] drm/msm: Add PERFCNTR_CONFIG ioctl Rob Clark
@ 2026-05-06 17:10 ` Rob Clark
0 siblings, 0 replies; 3+ messages in thread
From: Rob Clark @ 2026-05-06 17:10 UTC (permalink / raw)
To: dri-devel
Cc: linux-arm-msm, freedreno, Akhil P Oommen, Rob Clark,
Dmitry Baryshkov, Dmitry Baryshkov, Abhinav Kumar, Jessica Zhang,
Sean Paul, Marijn Suijten, David Airlie, Simona Vetter,
Nathan Chancellor, Nick Desaulniers, Bill Wendling, Justin Stitt,
open list,
open list:CLANG/LLVM BUILD SUPPORT:Keyword:\b(?i:clang|llvm)\b
Update gen_header.py to bring in support for generating perfcntr tables.
Sync from https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/40522
Signed-off-by: Rob Clark <robin.clark@oss.qualcomm.com>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@oss.qualcomm.com>
---
drivers/gpu/drm/msm/registers/gen_header.py | 2079 ++++++++++-----
1 file changed, 1146 insertions(+), 933 deletions(-)
diff --git a/drivers/gpu/drm/msm/registers/gen_header.py b/drivers/gpu/drm/msm/registers/gen_header.py
index 2acad951f1e2..07e6f0cb4e66 100644
--- a/drivers/gpu/drm/msm/registers/gen_header.py
+++ b/drivers/gpu/drm/msm/registers/gen_header.py
@@ -11,997 +11,1210 @@ import collections
import argparse
import time
import datetime
+import json
+
class Error(Exception):
- def __init__(self, message):
- self.message = message
+ def __init__(self, message):
+ self.message = message
+
class Enum(object):
- def __init__(self, name):
- self.name = name
- self.values = []
-
- def has_name(self, name):
- for (n, value) in self.values:
- if n == name:
- return True
- return False
-
- def names(self):
- return [n for (n, value) in self.values]
-
- def dump(self, is_deprecated):
- use_hex = False
- for (name, value) in self.values:
- if value > 0x1000:
- use_hex = True
-
- print("enum %s {" % self.name)
- for (name, value) in self.values:
- if use_hex:
- print("\t%s = 0x%08x," % (name, value))
- else:
- print("\t%s = %d," % (name, value))
- print("};\n")
-
- def dump_pack_struct(self, is_deprecated):
- pass
+ def __init__(self, name):
+ self.name = name
+ self.values = []
+
+ def has_name(self, name):
+ for (n, value) in self.values:
+ if n == name:
+ return True
+ return False
+
+ def names(self):
+ return [n for (n, value) in self.values]
+
+ def value(self, name):
+ for (n, v) in self.values:
+ if n == name:
+ return v
+
+ def dump(self, has_variants):
+ use_hex = False
+ for (name, value) in self.values:
+ if value > 0x1000:
+ use_hex = True
+
+ print("enum %s {" % self.name)
+ for (name, value) in self.values:
+ if use_hex:
+ print("\t%s = 0x%08x," % (name, value))
+ else:
+ print("\t%s = %d," % (name, value))
+ print("};\n")
+
+ def dump_pack_struct(self, has_variants):
+ pass
+
class Field(object):
- def __init__(self, name, low, high, shr, type, parser):
- self.name = name
- self.low = low
- self.high = high
- self.shr = shr
- self.type = type
-
- builtin_types = [ None, "a3xx_regid", "boolean", "uint", "hex", "int", "fixed", "ufixed", "float", "address", "waddress" ]
-
- maxpos = parser.current_bitsize - 1
-
- if low < 0 or low > maxpos:
- raise parser.error("low attribute out of range: %d" % low)
- if high < 0 or high > maxpos:
- raise parser.error("high attribute out of range: %d" % high)
- if high < low:
- raise parser.error("low is greater than high: low=%d, high=%d" % (low, high))
- if self.type == "boolean" and not low == high:
- raise parser.error("booleans should be 1 bit fields")
- elif self.type == "float" and not (high - low == 31 or high - low == 15):
- raise parser.error("floats should be 16 or 32 bit fields")
- elif self.type not in builtin_types and self.type not in parser.enums:
- raise parser.error("unknown type '%s'" % self.type)
-
- def ctype(self, var_name):
- if self.type is None:
- type = "uint32_t"
- val = var_name
- elif self.type == "boolean":
- type = "bool"
- val = var_name
- elif self.type == "uint" or self.type == "hex" or self.type == "a3xx_regid":
- type = "uint32_t"
- val = var_name
- elif self.type == "int":
- type = "int32_t"
- val = var_name
- elif self.type == "fixed":
- type = "float"
- val = "((int32_t)(%s * %d.0))" % (var_name, 1 << self.radix)
- elif self.type == "ufixed":
- type = "float"
- val = "((uint32_t)(%s * %d.0))" % (var_name, 1 << self.radix)
- elif self.type == "float" and self.high - self.low == 31:
- type = "float"
- val = "fui(%s)" % var_name
- elif self.type == "float" and self.high - self.low == 15:
- type = "float"
- val = "_mesa_float_to_half(%s)" % var_name
- elif self.type in [ "address", "waddress" ]:
- type = "uint64_t"
- val = var_name
- else:
- type = "enum %s" % self.type
- val = var_name
-
- if self.shr > 0:
- val = "(%s >> %d)" % (val, self.shr)
-
- return (type, val)
+ def __init__(self, name, low, high, shr, type, parser):
+ self.name = name
+ self.low = low
+ self.high = high
+ self.shr = shr
+ self.type = type
+
+ builtin_types = [None, "a3xx_regid", "boolean", "uint", "hex",
+ "int", "fixed", "ufixed", "float", "address", "waddress"]
+
+ maxpos = parser.current_bitsize - 1
+
+ if low < 0 or low > maxpos:
+ raise parser.error("low attribute out of range: %d" % low)
+ if high < 0 or high > maxpos:
+ raise parser.error("high attribute out of range: %d" % high)
+ if high < low:
+ raise parser.error(
+ "low is greater than high: low=%d, high=%d" % (low, high))
+ if self.type == "boolean" and not low == high:
+ raise parser.error("booleans should be 1 bit fields")
+ elif self.type == "float" and not (high - low == 31 or high - low == 15):
+ raise parser.error("floats should be 16 or 32 bit fields")
+ elif self.type not in builtin_types and self.type not in parser.enums:
+ raise parser.error("unknown type '%s'" % self.type)
+
+ def ctype(self, var_name):
+ if self.type is None:
+ type = "uint32_t"
+ val = var_name
+ elif self.type == "boolean":
+ type = "bool"
+ val = var_name
+ elif self.type == "uint" or self.type == "hex" or self.type == "a3xx_regid":
+ type = "uint32_t"
+ val = var_name
+ elif self.type == "int":
+ type = "int32_t"
+ val = var_name
+ elif self.type == "fixed":
+ type = "float"
+ val = "(uint32_t)((int32_t)(%s * %d.0))" % (var_name, 1 << self.radix)
+ elif self.type == "ufixed":
+ type = "float"
+ val = "((uint32_t)(%s * %d.0))" % (var_name, 1 << self.radix)
+ elif self.type == "float" and self.high - self.low == 31:
+ type = "float"
+ val = "fui(%s)" % var_name
+ elif self.type == "float" and self.high - self.low == 15:
+ type = "float"
+ val = "_mesa_float_to_half(%s)" % var_name
+ elif self.type in ["address", "waddress"]:
+ type = "uint64_t"
+ val = var_name
+ else:
+ type = "enum %s" % self.type
+ val = var_name
+
+ if self.shr > 0:
+ val = "(%s >> %d)" % (val, self.shr)
+
+ return (type, val)
+
def tab_to(name, value):
- tab_count = (68 - (len(name) & ~7)) // 8
- if tab_count <= 0:
- tab_count = 1
- print(name + ('\t' * tab_count) + value)
+ tab_count = (68 - (len(name) & ~7)) // 8
+ if tab_count <= 0:
+ tab_count = 1
+ print(name + ('\t' * tab_count) + value)
+
+def define_macro(name, value, has_variants):
+ if has_variants:
+ value = "__FD_DEPRECATED " + value
+ tab_to(name, value)
def mask(low, high):
- return ((0xffffffffffffffff >> (64 - (high + 1 - low))) << low)
+ return ((0xffffffffffffffff >> (64 - (high + 1 - low))) << low)
+
def field_name(reg, f):
- if f.name:
- name = f.name.lower()
- else:
- # We hit this path when a reg is defined with no bitset fields, ie.
- # <reg32 offset="0x88db" name="RB_RESOLVE_SYSTEM_BUFFER_ARRAY_PITCH" low="0" high="28" shr="6" type="uint"/>
- name = reg.name.lower()
+ if f.name:
+ name = f.name.lower()
+ else:
+ # We hit this path when a reg is defined with no bitset fields, ie.
+ # <reg32 offset="0x88db" name="RB_RESOLVE_SYSTEM_BUFFER_ARRAY_PITCH" low="0" high="28" shr="6" type="uint"/>
+ name = reg.name.lower()
- if (name in [ "double", "float", "int" ]) or not (name[0].isalpha()):
- name = "_" + name
+ if (name in ["double", "float", "int"]) or not (name[0].isalpha()):
+ name = "_" + name
- return name
+ return name
# indices - array of (ctype, stride, __offsets_NAME)
+
+
def indices_varlist(indices):
- return ", ".join(["i%d" % i for i in range(len(indices))])
+ return ", ".join(["i%d" % i for i in range(len(indices))])
+
def indices_prototype(indices):
- return ", ".join(["%s i%d" % (ctype, idx)
- for (idx, (ctype, stride, offset)) in enumerate(indices)])
+ return ", ".join(["%s i%d" % (ctype, idx)
+ for (idx, (ctype, stride, offset)) in enumerate(indices)])
+
def indices_strides(indices):
- return " + ".join(["0x%x*i%d" % (stride, idx)
- if stride else
- "%s(i%d)" % (offset, idx)
- for (idx, (ctype, stride, offset)) in enumerate(indices)])
+ return " + ".join(["0x%x*i%d" % (stride, idx)
+ if stride else
+ "%s(i%d)" % (offset, idx)
+ for (idx, (ctype, stride, offset)) in enumerate(indices)])
+
def is_number(str):
- try:
- int(str)
- return True
- except ValueError:
- return False
+ try:
+ int(str)
+ return True
+ except ValueError:
+ return False
+
def sanitize_variant(variant):
- if variant and "-" in variant:
- return variant[:variant.index("-")]
- return variant
+ if variant and "-" in variant:
+ return variant[:variant.index("-")]
+ return variant
+
class Bitset(object):
- def __init__(self, name, template):
- self.name = name
- self.inline = False
- self.reg = None
- if template:
- self.fields = template.fields[:]
- else:
- self.fields = []
-
- # Get address field if there is one in the bitset, else return None:
- def get_address_field(self):
- for f in self.fields:
- if f.type in [ "address", "waddress" ]:
- return f
- return None
-
- def dump_regpair_builder(self, reg):
- print("#ifndef NDEBUG")
- known_mask = 0
- for f in self.fields:
- known_mask |= mask(f.low, f.high)
- if f.type in [ "boolean", "address", "waddress" ]:
- continue
- type, val = f.ctype("fields.%s" % field_name(reg, f))
- print(" assert((%-40s & 0x%08x) == 0);" % (val, 0xffffffff ^ mask(0 , f.high - f.low)))
- print(" assert((%-40s & 0x%08x) == 0);" % ("fields.unknown", known_mask))
- print("#endif\n")
-
- print(" return (struct fd_reg_pair) {")
- print(" .reg = (uint32_t)%s," % reg.reg_offset())
- print(" .value =")
- cast = "(uint64_t)" if reg.bit_size == 64 else ""
- for f in self.fields:
- if f.type in [ "address", "waddress" ]:
- continue
- else:
- type, val = f.ctype("fields.%s" % field_name(reg, f))
- print(" (%s%-40s << %2d) |" % (cast, val, f.low))
- value_name = "dword"
- if reg.bit_size == 64:
- value_name = "qword"
- print(" fields.unknown | fields.%s," % (value_name,))
-
- address = self.get_address_field()
- if address:
- print(" .bo = fields.bo,")
- print(" .is_address = true,")
- if f.type == "waddress":
- print(" .bo_write = true,")
- print(" .bo_offset = fields.bo_offset,")
- print(" .bo_shift = %d," % address.shr)
- print(" .bo_low = %d," % address.low)
-
- print(" };")
-
- def dump_pack_struct(self, is_deprecated, reg=None):
- if not reg:
- return
-
- prefix = reg.full_name
-
- print("struct %s {" % prefix)
- for f in self.fields:
- if f.type in [ "address", "waddress" ]:
- tab_to(" __bo_type", "bo;")
- tab_to(" uint32_t", "bo_offset;")
- continue
- name = field_name(reg, f)
-
- type, val = f.ctype("var")
-
- tab_to(" %s" % type, "%s;" % name)
- if reg.bit_size == 64:
- tab_to(" uint64_t", "unknown;")
- tab_to(" uint64_t", "qword;")
- else:
- tab_to(" uint32_t", "unknown;")
- tab_to(" uint32_t", "dword;")
- print("};\n")
-
- depcrstr = ""
- if is_deprecated:
- depcrstr = " FD_DEPRECATED"
- if reg.array:
- print("static inline%s struct fd_reg_pair\npack_%s(uint32_t __i, struct %s fields)\n{" %
- (depcrstr, prefix, prefix))
- else:
- print("static inline%s struct fd_reg_pair\npack_%s(struct %s fields)\n{" %
- (depcrstr, prefix, prefix))
-
- self.dump_regpair_builder(reg)
-
- print("\n}\n")
-
- if self.get_address_field():
- skip = ", { .reg = 0 }"
- else:
- skip = ""
-
- if reg.array:
- print("#define %s(__i, ...) pack_%s(__i, __struct_cast(%s) { __VA_ARGS__ })%s\n" %
- (prefix, prefix, prefix, skip))
- else:
- print("#define %s(...) pack_%s(__struct_cast(%s) { __VA_ARGS__ })%s\n" %
- (prefix, prefix, prefix, skip))
-
-
- def dump(self, is_deprecated, prefix=None, reg=None):
- if prefix is None:
- prefix = self.name
- reg64 = reg and self.reg and self.reg.bit_size == 64
- if reg64:
- print("static inline uint32_t %s_LO(uint32_t val)\n{" % prefix)
- print("\treturn val;\n}")
- print("static inline uint32_t %s_HI(uint32_t val)\n{" % prefix)
- print("\treturn val;\n}")
- for f in self.fields:
- if f.name:
- name = prefix + "_" + f.name
- else:
- name = prefix
-
- if not f.name and f.low == 0 and f.shr == 0 and f.type not in ["float", "fixed", "ufixed"]:
- pass
- elif f.type == "boolean" or (f.type is None and f.low == f.high):
- tab_to("#define %s" % name, "0x%08x" % (1 << f.low))
- else:
- typespec = "ull" if reg64 else "u"
- tab_to("#define %s__MASK" % name, "0x%08x%s" % (mask(f.low, f.high), typespec))
- tab_to("#define %s__SHIFT" % name, "%d" % f.low)
- type, val = f.ctype("val")
- ret_type = "uint64_t" if reg64 else "uint32_t"
- cast = "(uint64_t)" if reg64 else ""
-
- print("static inline %s %s(%s val)\n{" % (ret_type, name, type))
- if f.shr > 0:
- print("\tassert(!(val & 0x%x));" % mask(0, f.shr - 1))
- print("\treturn (%s(%s) << %s__SHIFT) & %s__MASK;\n}" % (cast, val, name, name))
- print()
+ def __init__(self, name, template):
+ self.name = name
+ self.inline = False
+ self.reg = None
+ if template:
+ self.fields = template.fields[:]
+ else:
+ self.fields = []
+
+ # Get address field if there is one in the bitset, else return None:
+ def get_address_field(self):
+ for f in self.fields:
+ if f.type in ["address", "waddress"]:
+ return f
+ return None
+
+ def dump_regpair_builder(self, reg):
+ print("#ifndef NDEBUG")
+ known_mask = 0
+ for f in self.fields:
+ known_mask |= mask(f.low, f.high)
+ if f.type in ["boolean", "address", "waddress"]:
+ continue
+ type, val = f.ctype("fields.%s" % field_name(reg, f))
+ print(" assert((%-40s & 0x%08x) == 0);" %
+ (val, 0xffffffff ^ mask(0, f.high - f.low)))
+ print(" assert((%-40s & 0x%08x) == 0);" %
+ ("fields.unknown", known_mask))
+ print("#endif\n")
+
+ print(" return (struct fd_reg_pair) {")
+ print(" .reg = (uint32_t)%s," % reg.reg_offset())
+ print(" .value =")
+ cast = "(uint64_t)" if reg.bit_size == 64 else ""
+ for f in self.fields:
+ if f.type in ["address", "waddress"]:
+ continue
+ else:
+ type, val = f.ctype("fields.%s" % field_name(reg, f))
+ print(" (%s%-40s << %2d) |" % (cast, val, f.low))
+ value_name = "dword"
+ if reg.bit_size == 64:
+ value_name = "qword"
+ print(" fields.unknown | fields.%s," % (value_name,))
+
+ address = self.get_address_field()
+ if address:
+ print("#ifndef TU_CS_H")
+ print(" .bo = fields.bo,")
+ print(" .is_address = true,")
+ print(" .bo_offset = fields.bo_offset,")
+ print(" .bo_shift = %d," % address.shr)
+ print(" .bo_low = %d," % address.low)
+ print("#else")
+ print(" .is_address = true,")
+ print("#endif")
+
+ print(" };")
+
+ def dump_pack_struct(self, has_variants, reg=None):
+ if not reg:
+ return
+
+ prefix = reg.full_name
+
+ constexpr_mark = " CONSTEXPR"
+
+ print("struct %s {" % prefix)
+ for f in self.fields:
+ if f.type in ["address", "waddress"]:
+ print("#ifndef TU_CS_H")
+ tab_to(" __bo_type", "bo;")
+ tab_to(" uint32_t", "bo_offset;")
+ print("#endif\n")
+ continue
+ name = field_name(reg, f)
+
+ type, val = f.ctype("var")
+
+ tab_to(" %s" % type, "%s;" % name)
+
+ if f.type == "float":
+ # Requires using `fui()` or `_mesa_float_to_half()`
+ constexpr_mark = ""
+ if reg.bit_size == 64:
+ tab_to(" uint64_t", "qword;")
+ tab_to(" uint64_t", "unknown;")
+ else:
+ tab_to(" uint32_t", "dword;")
+ tab_to(" uint32_t", "unknown;")
+ print("};\n")
+
+ if not has_variants:
+ print("static%s inline struct fd_reg_pair" % constexpr_mark)
+ if reg.array:
+ print("pack_%s(uint32_t __i, struct %s fields)\n{" % (prefix, prefix))
+ else:
+ print("pack_%s(struct %s fields)\n{" % (prefix, prefix))
+
+ self.dump_regpair_builder(reg)
+
+ print("\n}\n")
+
+ if self.get_address_field():
+ skip = ", { .reg = 0 }"
+ else:
+ skip = ""
+
+ if reg.array:
+ print("#define %s(__i, ...) pack_%s(__i, __struct_cast(%s) { __VA_ARGS__ })%s\n" %
+ (prefix, prefix, prefix, skip))
+ else:
+ print("#define %s(...) pack_%s(__struct_cast(%s) { __VA_ARGS__ })%s\n" %
+ (prefix, prefix, prefix, skip))
+
+ def dump(self, has_variants, prefix=None, reg=None):
+ if prefix is None:
+ prefix = self.name
+ suffix = ""
+ if self.reg and self.reg.bit_size == 64:
+ print(
+ "static CONSTEXPR inline uint32_t %s_LO(uint32_t val)\n{" % prefix)
+ print("\treturn val;\n}")
+ print(
+ "static CONSTEXPR inline uint32_t %s_HI(uint32_t val)\n{" % prefix)
+ print("\treturn val;\n}")
+ suffix = "ull"
+
+ for f in self.fields:
+ if f.name:
+ name = prefix + "_" + f.name
+ else:
+ name = prefix
+
+ if not f.name and f.low == 0 and f.shr == 0 and f.type not in ["float", "fixed", "ufixed"]:
+ pass
+ elif f.type == "boolean" or (f.type is None and f.low == f.high):
+ tab_to("#define %s" % name, "0x%08x%s" % ((1 << f.low), suffix))
+ else:
+ tab_to("#define %s__MASK" %
+ name, "0x%08x%s" % (mask(f.low, f.high), suffix))
+ tab_to("#define %s__SHIFT" % name, "%d" % f.low)
+ type, val = f.ctype("val")
+ ret_type = "uint64_t" if reg and reg.bit_size == 64 else "uint32_t"
+ cast = "(uint64_t)" if reg and reg.bit_size == 64 else ""
+
+ constexpr_mark = "" if type == "float" else " CONSTEXPR"
+ print("static%s inline %s %s(%s val)\n{" % (
+ constexpr_mark, ret_type, name, type))
+ if f.shr > 0:
+ print("\tassert(!(val & 0x%x));" % mask(0, f.shr - 1))
+ print("\treturn (%s(%s) << %s__SHIFT) & %s__MASK;\n}" %
+ (cast, val, name, name))
+ print()
+
class Array(object):
- def __init__(self, attrs, domain, variant, parent, index_type):
- if "name" in attrs:
- self.local_name = attrs["name"]
- else:
- self.local_name = ""
- self.domain = domain
- self.variant = variant
- self.parent = parent
- self.children = []
- if self.parent:
- self.name = self.parent.name + "_" + self.local_name
- else:
- self.name = self.local_name
- if "offsets" in attrs:
- self.offsets = map(lambda i: "0x%08x" % int(i, 0), attrs["offsets"].split(","))
- self.fixed_offsets = True
- elif "doffsets" in attrs:
- self.offsets = map(lambda s: "(%s)" % s , attrs["doffsets"].split(","))
- self.fixed_offsets = True
- else:
- self.offset = int(attrs["offset"], 0)
- self.stride = int(attrs["stride"], 0)
- self.fixed_offsets = False
- if "index" in attrs:
- self.index_type = index_type
- else:
- self.index_type = None
- self.length = int(attrs["length"], 0)
- if "usage" in attrs:
- self.usages = attrs["usage"].split(',')
- else:
- self.usages = None
-
- def index_ctype(self):
- if not self.index_type:
- return "uint32_t"
- else:
- return "enum %s" % self.index_type.name
-
- # Generate array of (ctype, stride, __offsets_NAME)
- def indices(self):
- if self.parent:
- indices = self.parent.indices()
- else:
- indices = []
- if self.length != 1:
- if self.fixed_offsets:
- indices.append((self.index_ctype(), None, "__offset_%s" % self.local_name))
- else:
- indices.append((self.index_ctype(), self.stride, None))
- return indices
-
- def total_offset(self):
- offset = 0
- if not self.fixed_offsets:
- offset += self.offset
- if self.parent:
- offset += self.parent.total_offset()
- return offset
-
- def dump(self, is_deprecated):
- depcrstr = ""
- if is_deprecated:
- depcrstr = " FD_DEPRECATED"
- proto = indices_varlist(self.indices())
- strides = indices_strides(self.indices())
- array_offset = self.total_offset()
- if self.fixed_offsets:
- print("static inline%s uint32_t __offset_%s(%s idx)" % (depcrstr, self.local_name, self.index_ctype()))
- print("{\n\tswitch (idx) {")
- if self.index_type:
- for val, offset in zip(self.index_type.names(), self.offsets):
- print("\t\tcase %s: return %s;" % (val, offset))
- else:
- for idx, offset in enumerate(self.offsets):
- print("\t\tcase %d: return %s;" % (idx, offset))
- print("\t\tdefault: return INVALID_IDX(idx);")
- print("\t}\n}")
- if proto == '':
- tab_to("#define REG_%s_%s" % (self.domain, self.name), "0x%08x\n" % array_offset)
- else:
- tab_to("#define REG_%s_%s(%s)" % (self.domain, self.name, proto), "(0x%08x + %s )\n" % (array_offset, strides))
-
- def dump_pack_struct(self, is_deprecated):
- pass
-
- def dump_regpair_builder(self):
- pass
+ def __init__(self, attrs, domain, variant, parent, index_type):
+ if "name" in attrs:
+ self.local_name = attrs["name"]
+ else:
+ self.local_name = ""
+ self.domain = domain
+ self.variant = variant
+ self.parent = parent
+ self.children = []
+ if self.parent:
+ self.name = self.parent.name + "_" + self.local_name
+ else:
+ self.name = self.local_name
+ if "offsets" in attrs:
+ self.offsets = map(lambda i: "0x%08x" % int(i, 0),
+ attrs["offsets"].split(","))
+ self.fixed_offsets = True
+ elif "doffsets" in attrs:
+ self.offsets = map(lambda s: "(%s)" % s,
+ attrs["doffsets"].split(","))
+ self.fixed_offsets = True
+ else:
+ self.offset = int(attrs["offset"], 0)
+ self.stride = int(attrs["stride"], 0)
+ self.fixed_offsets = False
+ if "index" in attrs:
+ self.index_type = index_type
+ else:
+ self.index_type = None
+ self.length = int(attrs["length"], 0)
+ if "usage" in attrs:
+ self.usages = attrs["usage"].split(',')
+ else:
+ self.usages = None
+
+ def index_ctype(self):
+ if not self.index_type:
+ return "uint32_t"
+ else:
+ return "enum %s" % self.index_type.name
+
+ # Generate array of (ctype, stride, __offsets_NAME)
+ def indices(self):
+ if self.parent:
+ indices = self.parent.indices()
+ else:
+ indices = []
+ if self.length != 1:
+ if self.fixed_offsets:
+ indices.append((self.index_ctype(), None,
+ "__offset_%s" % self.local_name))
+ else:
+ indices.append((self.index_ctype(), self.stride, None))
+ return indices
+
+ def total_offset(self):
+ offset = 0
+ if not self.fixed_offsets:
+ offset += self.offset
+ if self.parent:
+ offset += self.parent.total_offset()
+ return offset
+
+ def dump(self, has_variants):
+ proto = indices_varlist(self.indices())
+ strides = indices_strides(self.indices())
+ array_offset = self.total_offset()
+ if self.fixed_offsets and not has_variants:
+ print("static CONSTEXPR inline uint32_t __offset_%s(%s idx)" %
+ (self.local_name, self.index_ctype()))
+ print("{\n\tswitch (idx) {")
+ if self.index_type:
+ for val, offset in zip(self.index_type.names(), self.offsets):
+ print("\t\tcase %s: return %s;" % (val, offset))
+ else:
+ for idx, offset in enumerate(self.offsets):
+ print("\t\tcase %d: return %s;" % (idx, offset))
+ print("\t\tdefault: return INVALID_IDX(idx);")
+ print("\t}\n}")
+ if proto == '':
+ define_macro("#define REG_%s_%s" % (self.domain, self.name),
+ "0x%08x\n" % array_offset, has_variants)
+ else:
+ define_macro("#define REG_%s_%s(%s)" % (self.domain, self.name, proto),
+ "(0x%08x + %s )\n" % (array_offset, strides), has_variants)
+
+ def dump_pack_struct(self, has_variants):
+ pass
+
+ def dump_regpair_builder(self):
+ pass
+
class Reg(object):
- def __init__(self, attrs, domain, array, bit_size):
- self.name = attrs["name"]
- self.domain = domain
- self.array = array
- self.offset = int(attrs["offset"], 0)
- self.type = None
- self.bit_size = bit_size
- if array:
- self.name = array.name + "_" + self.name
- array.children.append(self)
- self.full_name = self.domain + "_" + self.name
- if "stride" in attrs:
- self.stride = int(attrs["stride"], 0)
- self.length = int(attrs["length"], 0)
- else:
- self.stride = None
- self.length = None
-
- # Generate array of (ctype, stride, __offsets_NAME)
- def indices(self):
- if self.array:
- indices = self.array.indices()
- else:
- indices = []
- if self.stride:
- indices.append(("uint32_t", self.stride, None))
- return indices
-
- def total_offset(self):
- if self.array:
- return self.array.total_offset() + self.offset
- else:
- return self.offset
-
- def reg_offset(self):
- if self.array:
- offset = self.array.offset + self.offset
- return "(0x%08x + 0x%x*__i)" % (offset, self.array.stride)
- return "0x%08x" % self.offset
-
- def dump(self, is_deprecated):
- depcrstr = ""
- if is_deprecated:
- depcrstr = " FD_DEPRECATED "
- proto = indices_prototype(self.indices())
- strides = indices_strides(self.indices())
- offset = self.total_offset()
- if proto == '':
- tab_to("#define REG_%s" % self.full_name, "0x%08x" % offset)
- else:
- print("static inline%s uint32_t REG_%s(%s) { return 0x%08x + %s; }" % (depcrstr, self.full_name, proto, offset, strides))
-
- if self.bitset.inline:
- self.bitset.dump(is_deprecated, self.full_name, self)
- print("")
-
- def dump_pack_struct(self, is_deprecated):
- if self.bitset.inline:
- self.bitset.dump_pack_struct(is_deprecated, self)
-
- def dump_regpair_builder(self):
- self.bitset.dump_regpair_builder(self)
-
- def dump_py(self):
- print("\tREG_%s = 0x%08x" % (self.full_name, self.offset))
+ def __init__(self, attrs, domain, array, bit_size):
+ self.name = attrs["name"]
+ self.domain = domain
+ self.array = array
+ self.offset = int(attrs["offset"], 0)
+ self.type = None
+ self.bit_size = bit_size
+ if array:
+ self.name = array.name + "_" + self.name
+ array.children.append(self)
+ self.full_name = self.domain + "_" + self.name
+ if "stride" in attrs:
+ self.stride = int(attrs["stride"], 0)
+ self.length = int(attrs["length"], 0)
+ else:
+ self.stride = None
+ self.length = None
+
+ # Generate array of (ctype, stride, __offsets_NAME)
+ def indices(self):
+ if self.array:
+ indices = self.array.indices()
+ else:
+ indices = []
+ if self.stride:
+ indices.append(("uint32_t", self.stride, None))
+ return indices
+
+ def total_offset(self):
+ if self.array:
+ return self.array.total_offset() + self.offset
+ else:
+ return self.offset
+
+ def reg_offset(self):
+ if self.array:
+ offset = self.array.offset + self.offset
+ return "(0x%08x + 0x%x*__i)" % (offset, self.array.stride)
+ return "0x%08x" % self.offset
+
+ def dump(self, has_variants):
+ proto = indices_prototype(self.indices())
+ strides = indices_strides(self.indices())
+ offset = self.total_offset()
+ if proto == '':
+ define_macro("#define REG_%s" % self.full_name, "0x%08x" % offset, has_variants)
+ elif not has_variants:
+ # has_variants is known false in this branch, so no
+ # deprecation marker is needed:
+ print("static CONSTEXPR inline uint32_t REG_%s(%s) { return 0x%08x + %s; }" % (
+ self.full_name, proto, offset, strides))
+
+ if self.bitset.inline:
+ self.bitset.dump(has_variants, self.full_name, self)
+ print("")
+
+ def dump_pack_struct(self, has_variants):
+ if self.bitset.inline:
+ self.bitset.dump_pack_struct(has_variants, self)
+
+ def dump_regpair_builder(self):
+ self.bitset.dump_regpair_builder(self)
+
+ def dump_py(self):
+ offset = self.offset
+ if self.array:
+ offset += self.array.offset
+ print("\tREG_%s = 0x%08x" % (self.full_name, offset))
class Parser(object):
- def __init__(self):
- self.current_array = None
- self.current_domain = None
- self.current_prefix = None
- self.current_prefix_type = None
- self.current_stripe = None
- self.current_bitset = None
- self.current_bitsize = 32
- # The varset attribute on the domain specifies the enum which
- # specifies all possible hw variants:
- self.current_varset = None
- # Regs that have multiple variants.. we only generated the C++
- # template based struct-packers for these
- self.variant_regs = {}
- # Information in which contexts regs are used, to be used in
- # debug options
- self.usage_regs = collections.defaultdict(list)
- self.bitsets = {}
- self.enums = {}
- self.variants = set()
- self.file = []
- self.xml_files = []
-
- def error(self, message):
- parser, filename = self.stack[-1]
- return Error("%s:%d:%d: %s" % (filename, parser.CurrentLineNumber, parser.CurrentColumnNumber, message))
-
- def prefix(self, variant=None):
- if self.current_prefix_type == "variant" and variant:
- return sanitize_variant(variant)
- elif self.current_stripe:
- return self.current_stripe + "_" + self.current_domain
- elif self.current_prefix:
- return self.current_prefix + "_" + self.current_domain
- else:
- return self.current_domain
-
- def parse_field(self, name, attrs):
- try:
- if "pos" in attrs:
- high = low = int(attrs["pos"], 0)
- elif "high" in attrs and "low" in attrs:
- high = int(attrs["high"], 0)
- low = int(attrs["low"], 0)
- else:
- low = 0
- high = self.current_bitsize - 1
-
- if "type" in attrs:
- type = attrs["type"]
- else:
- type = None
-
- if "shr" in attrs:
- shr = int(attrs["shr"], 0)
- else:
- shr = 0
-
- b = Field(name, low, high, shr, type, self)
-
- if type == "fixed" or type == "ufixed":
- b.radix = int(attrs["radix"], 0)
-
- self.current_bitset.fields.append(b)
- except ValueError as e:
- raise self.error(e)
-
- def parse_varset(self, attrs):
- # Inherit the varset from the enclosing domain if not overriden:
- varset = self.current_varset
- if "varset" in attrs:
- varset = self.enums[attrs["varset"]]
- return varset
-
- def parse_variants(self, attrs):
- if "variants" not in attrs:
- return None
-
- variant = attrs["variants"].split(",")[0]
- varset = self.parse_varset(attrs)
-
- if "-" in variant:
- # if we have a range, validate that both the start and end
- # of the range are valid enums:
- start = variant[:variant.index("-")]
- end = variant[variant.index("-") + 1:]
- assert varset.has_name(start)
- if end != "":
- assert varset.has_name(end)
- else:
- assert varset.has_name(variant)
-
- return variant
-
- def add_all_variants(self, reg, attrs, parent_variant):
- # TODO this should really handle *all* variants, including dealing
- # with open ended ranges (ie. "A2XX,A4XX-") (we have the varset
- # enum now to make that possible)
- variant = self.parse_variants(attrs)
- if not variant:
- variant = parent_variant
-
- if reg.name not in self.variant_regs:
- self.variant_regs[reg.name] = {}
- else:
- # All variants must be same size:
- v = next(iter(self.variant_regs[reg.name]))
- assert self.variant_regs[reg.name][v].bit_size == reg.bit_size
-
- self.variant_regs[reg.name][variant] = reg
-
- def add_all_usages(self, reg, usages):
- if not usages:
- return
-
- for usage in usages:
- self.usage_regs[usage].append(reg)
-
- self.variants.add(reg.domain)
-
- def do_validate(self, schemafile):
- if not self.validate:
- return
-
- try:
- from lxml import etree
-
- parser, filename = self.stack[-1]
- dirname = os.path.dirname(filename)
-
- # we expect this to look like <namespace url> schema.xsd.. I think
- # technically it is supposed to be just a URL, but that doesn't
- # quite match up to what we do.. Just skip over everything up to
- # and including the first whitespace character:
- schemafile = schemafile[schemafile.rindex(" ")+1:]
-
- # this is a bit cheezy, but the xml file to validate could be
- # in a child director, ie. we don't really know where the schema
- # file is, the way the rnn C code does. So if it doesn't exist
- # just look one level up
- if not os.path.exists(dirname + "/" + schemafile):
- schemafile = "../" + schemafile
-
- if not os.path.exists(dirname + "/" + schemafile):
- raise self.error("Cannot find schema for: " + filename)
-
- xmlschema_doc = etree.parse(dirname + "/" + schemafile)
- xmlschema = etree.XMLSchema(xmlschema_doc)
-
- xml_doc = etree.parse(filename)
- if not xmlschema.validate(xml_doc):
- error_str = str(xmlschema.error_log.filter_from_errors()[0])
- raise self.error("Schema validation failed for: " + filename + "\n" + error_str)
- except ImportError as e:
- print("lxml not found, skipping validation", file=sys.stderr)
-
- def do_parse(self, filename):
- filepath = os.path.abspath(filename)
- if filepath in self.xml_files:
- return
- self.xml_files.append(filepath)
- file = open(filename, "rb")
- parser = xml.parsers.expat.ParserCreate()
- self.stack.append((parser, filename))
- parser.StartElementHandler = self.start_element
- parser.EndElementHandler = self.end_element
- parser.CharacterDataHandler = self.character_data
- parser.buffer_text = True
- parser.ParseFile(file)
- self.stack.pop()
- file.close()
-
- def parse(self, rnn_path, filename, validate):
- self.path = rnn_path
- self.stack = []
- self.validate = validate
- self.do_parse(filename)
-
- def parse_reg(self, attrs, bit_size):
- self.current_bitsize = bit_size
- if "type" in attrs and attrs["type"] in self.bitsets:
- bitset = self.bitsets[attrs["type"]]
- if bitset.inline:
- self.current_bitset = Bitset(attrs["name"], bitset)
- self.current_bitset.inline = True
- else:
- self.current_bitset = bitset
- else:
- self.current_bitset = Bitset(attrs["name"], None)
- self.current_bitset.inline = True
- if "type" in attrs:
- self.parse_field(None, attrs)
-
- variant = self.parse_variants(attrs)
- if not variant and self.current_array:
- variant = self.current_array.variant
-
- self.current_reg = Reg(attrs, self.prefix(variant), self.current_array, bit_size)
- self.current_reg.bitset = self.current_bitset
- self.current_bitset.reg = self.current_reg
-
- if len(self.stack) == 1:
- self.file.append(self.current_reg)
-
- if variant is not None:
- self.add_all_variants(self.current_reg, attrs, variant)
-
- usages = None
- if "usage" in attrs:
- usages = attrs["usage"].split(',')
- elif self.current_array:
- usages = self.current_array.usages
-
- self.add_all_usages(self.current_reg, usages)
-
- def start_element(self, name, attrs):
- self.cdata = ""
- if name == "import":
- filename = attrs["file"]
- self.do_parse(os.path.join(self.path, filename))
- elif name == "domain":
- self.current_domain = attrs["name"]
- if "prefix" in attrs:
- self.current_prefix = sanitize_variant(self.parse_variants(attrs))
- self.current_prefix_type = attrs["prefix"]
- else:
- self.current_prefix = None
- self.current_prefix_type = None
- if "varset" in attrs:
- self.current_varset = self.enums[attrs["varset"]]
- elif name == "stripe":
- self.current_stripe = sanitize_variant(self.parse_variants(attrs))
- elif name == "enum":
- self.current_enum_value = 0
- self.current_enum = Enum(attrs["name"])
- self.enums[attrs["name"]] = self.current_enum
- if len(self.stack) == 1:
- self.file.append(self.current_enum)
- elif name == "value":
- if "value" in attrs:
- value = int(attrs["value"], 0)
- else:
- value = self.current_enum_value
- self.current_enum.values.append((attrs["name"], value))
- elif name == "reg32":
- self.parse_reg(attrs, 32)
- elif name == "reg64":
- self.parse_reg(attrs, 64)
- elif name == "array":
- self.current_bitsize = 32
- variant = self.parse_variants(attrs)
- index_type = self.enums[attrs["index"]] if "index" in attrs else None
- self.current_array = Array(attrs, self.prefix(variant), variant, self.current_array, index_type)
- if len(self.stack) == 1:
- self.file.append(self.current_array)
- elif name == "bitset":
- self.current_bitset = Bitset(attrs["name"], None)
- if "inline" in attrs and attrs["inline"] == "yes":
- self.current_bitset.inline = True
- self.bitsets[self.current_bitset.name] = self.current_bitset
- if len(self.stack) == 1 and not self.current_bitset.inline:
- self.file.append(self.current_bitset)
- elif name == "bitfield" and self.current_bitset:
- self.parse_field(attrs["name"], attrs)
- elif name == "database":
- self.do_validate(attrs["xsi:schemaLocation"])
-
- def end_element(self, name):
- if name == "domain":
- self.current_domain = None
- self.current_prefix = None
- self.current_prefix_type = None
- elif name == "stripe":
- self.current_stripe = None
- elif name == "bitset":
- self.current_bitset = None
- elif name == "reg32":
- self.current_reg = None
- elif name == "array":
- # if the array has no Reg children, push an implicit reg32:
- if len(self.current_array.children) == 0:
- attrs = {
- "name": "REG",
- "offset": "0",
- }
- self.parse_reg(attrs, 32)
- self.current_array = self.current_array.parent
- elif name == "enum":
- self.current_enum = None
-
- def character_data(self, data):
- self.cdata += data
-
- def dump_reg_usages(self):
- d = collections.defaultdict(list)
- for usage, regs in self.usage_regs.items():
- for reg in regs:
- variants = self.variant_regs.get(reg.name)
- if variants:
- for variant, vreg in variants.items():
- if reg == vreg:
- d[(usage, sanitize_variant(variant))].append(reg)
- else:
- for variant in self.variants:
- d[(usage, sanitize_variant(variant))].append(reg)
-
- print("#ifdef __cplusplus")
-
- for usage, regs in self.usage_regs.items():
- print("template<chip CHIP> constexpr inline uint16_t %s_REGS[] = {};" % (usage.upper()))
-
- for (usage, variant), regs in d.items():
- offsets = []
-
- for reg in regs:
- if reg.array:
- for i in range(reg.array.length):
- offsets.append(reg.array.offset + reg.offset + i * reg.array.stride)
- if reg.bit_size == 64:
- offsets.append(offsets[-1] + 1)
- else:
- offsets.append(reg.offset)
- if reg.bit_size == 64:
- offsets.append(offsets[-1] + 1)
-
- offsets.sort()
-
- print("template<> constexpr inline uint16_t %s_REGS<%s>[] = {" % (usage.upper(), variant))
- for offset in offsets:
- print("\t%s," % hex(offset))
- print("};")
-
- print("#endif")
-
- def has_variants(self, reg):
- return reg.name in self.variant_regs and not is_number(reg.name) and not is_number(reg.name[1:])
-
- def dump(self):
- enums = []
- bitsets = []
- regs = []
- for e in self.file:
- if isinstance(e, Enum):
- enums.append(e)
- elif isinstance(e, Bitset):
- bitsets.append(e)
- else:
- regs.append(e)
-
- for e in enums + bitsets + regs:
- e.dump(self.has_variants(e))
-
- self.dump_reg_usages()
-
-
- def dump_regs_py(self):
- regs = []
- for e in self.file:
- if isinstance(e, Reg):
- regs.append(e)
-
- for e in regs:
- e.dump_py()
-
-
- def dump_reg_variants(self, regname, variants):
- if is_number(regname) or is_number(regname[1:]):
- return
- print("#ifdef __cplusplus")
- print("struct __%s {" % regname)
- # TODO be more clever.. we should probably figure out which
- # fields have the same type in all variants (in which they
- # appear) and stuff everything else in a variant specific
- # sub-structure.
- seen_fields = []
- bit_size = 32
- array = False
- address = None
- for variant in variants.keys():
- print(" /* %s fields: */" % variant)
- reg = variants[variant]
- bit_size = reg.bit_size
- array = reg.array
- for f in reg.bitset.fields:
- fld_name = field_name(reg, f)
- if fld_name in seen_fields:
- continue
- seen_fields.append(fld_name)
- name = fld_name.lower()
- if f.type in [ "address", "waddress" ]:
- if address:
- continue
- address = f
- tab_to(" __bo_type", "bo;")
- tab_to(" uint32_t", "bo_offset;")
- continue
- type, val = f.ctype("var")
- tab_to(" %s" %type, "%s;" %name)
- print(" /* fallback fields: */")
- if bit_size == 64:
- tab_to(" uint64_t", "unknown;")
- tab_to(" uint64_t", "qword;")
- else:
- tab_to(" uint32_t", "unknown;")
- tab_to(" uint32_t", "dword;")
- print("};")
- # TODO don't hardcode the varset enum name
- varenum = "chip"
- print("template <%s %s>" % (varenum, varenum.upper()))
- print("static inline struct fd_reg_pair")
- xtra = ""
- xtravar = ""
- if array:
- xtra = "int __i, "
- xtravar = "__i, "
- print("__%s(%sstruct __%s fields) {" % (regname, xtra, regname))
- for variant in variants.keys():
- if "-" in variant:
- start = variant[:variant.index("-")]
- end = variant[variant.index("-") + 1:]
- if end != "":
- print(" if ((%s >= %s) && (%s <= %s)) {" % (varenum.upper(), start, varenum.upper(), end))
- else:
- print(" if (%s >= %s) {" % (varenum.upper(), start))
- else:
- print(" if (%s == %s) {" % (varenum.upper(), variant))
- reg = variants[variant]
- reg.dump_regpair_builder()
- print(" } else")
- print(" assert(!\"invalid variant\");")
- print(" return (struct fd_reg_pair){};")
- print("}")
-
- if bit_size == 64:
- skip = ", { .reg = 0 }"
- else:
- skip = ""
-
- print("#define %s(VARIANT, %s...) __%s<VARIANT>(%s{__VA_ARGS__})%s" % (regname, xtravar, regname, xtravar, skip))
- print("#endif /* __cplusplus */")
-
- def dump_structs(self):
- for e in self.file:
- e.dump_pack_struct(self.has_variants(e))
-
- for regname in self.variant_regs:
- self.dump_reg_variants(regname, self.variant_regs[regname])
+ def __init__(self):
+ self.current_array = None
+ self.current_domain = None
+ self.current_prefix = None
+ self.current_prefix_type = None
+ self.current_stripe = None
+ self.current_bitset = None
+ self.current_bitsize = 32
+ # The varset attribute on the domain names the enum that
+ # specifies all possible hw variants:
+ self.current_varset = None
+ # Regs that have multiple variants; we only generate the C++
+ # template-based struct-packers for these
+ self.variant_regs = {}
+ # Information about the contexts in which regs are used, for
+ # debug options
+ self.usage_regs = collections.defaultdict(list)
+ self.bitsets = {}
+ self.enums = {}
+ self.variants = set()
+ self.file = []
+ self.xml_files = []
+
+ def error(self, message):
+ parser, filename = self.stack[-1]
+ return Error("%s:%d:%d: %s" % (filename, parser.CurrentLineNumber, parser.CurrentColumnNumber, message))
+
+ def prefix(self, variant=None):
+ if self.current_prefix_type == "variant" and variant:
+ return sanitize_variant(variant)
+ elif self.current_stripe:
+ return self.current_stripe + "_" + self.current_domain
+ elif self.current_prefix:
+ return self.current_prefix + "_" + self.current_domain
+ else:
+ return self.current_domain
+
+ def parse_field(self, name, attrs):
+ try:
+ if "pos" in attrs:
+ high = low = int(attrs["pos"], 0)
+ elif "high" in attrs and "low" in attrs:
+ high = int(attrs["high"], 0)
+ low = int(attrs["low"], 0)
+ else:
+ low = 0
+ high = self.current_bitsize - 1
+
+ if "type" in attrs:
+ type = attrs["type"]
+ else:
+ type = None
+
+ if "shr" in attrs:
+ shr = int(attrs["shr"], 0)
+ else:
+ shr = 0
+
+ b = Field(name, low, high, shr, type, self)
+
+ if type == "fixed" or type == "ufixed":
+ b.radix = int(attrs["radix"], 0)
+
+ self.current_bitset.fields.append(b)
+ except ValueError as e:
+ raise self.error(e)
+
+ def parse_varset(self, attrs):
+ # Inherit the varset from the enclosing domain if not overridden:
+ varset = self.current_varset
+ if "varset" in attrs:
+ varset = self.enums[attrs["varset"]]
+ return varset
+
+ def parse_variants(self, attrs):
+ if "variants" not in attrs:
+ return None
+
+ variant = attrs["variants"].split(",")[0]
+ varset = self.parse_varset(attrs)
+
+ if "-" in variant:
+ # if we have a range, validate that both the start and end
+ # of the range are valid enums:
+ start = variant[:variant.index("-")]
+ end = variant[variant.index("-") + 1:]
+ assert varset.has_name(start)
+ if end != "":
+ assert varset.has_name(end)
+ else:
+ assert varset.has_name(variant)
+
+ return variant
+
+ def add_all_variants(self, reg, attrs, parent_variant):
+ # TODO this should really handle *all* variants, including dealing
+ # with open ended ranges (ie. "A2XX,A4XX-") (we have the varset
+ # enum now to make that possible)
+ variant = self.parse_variants(attrs)
+ if not variant:
+ variant = parent_variant
+
+ if reg.name not in self.variant_regs:
+ self.variant_regs[reg.name] = {}
+ else:
+ # All variants must be same size:
+ v = next(iter(self.variant_regs[reg.name]))
+ assert self.variant_regs[reg.name][v].bit_size == reg.bit_size
+
+ self.variant_regs[reg.name][variant] = reg
+
+ def add_all_usages(self, reg, usages):
+ if not usages:
+ return
+
+ for usage in usages:
+ self.usage_regs[usage].append(reg)
+
+ self.variants.add(reg.domain)
+
+ def do_validate(self, schemafile):
+ if not self.validate:
+ return
+
+ try:
+ from lxml import etree
+
+ parser, filename = self.stack[-1]
+ dirname = os.path.dirname(filename)
+
+ # we expect this to look like <namespace url> schema.xsd.. I think
+ # technically it is supposed to be just a URL, but that doesn't
+ # quite match up to what we do.. Just skip over everything up to
+ # and including the first whitespace character:
+ schemafile = schemafile[schemafile.rindex(" ")+1:]
+
+ # this is a bit cheesy, but the xml file to validate could be
+ # in a child directory, i.e. we don't really know where the schema
+ # file is, the way the rnn C code does. So if it doesn't exist
+ # just look one level up
+ if not os.path.exists(dirname + "/" + schemafile):
+ schemafile = "../" + schemafile
+
+ if not os.path.exists(dirname + "/" + schemafile):
+ raise self.error("Cannot find schema for: " + filename)
+
+ xmlschema_doc = etree.parse(dirname + "/" + schemafile)
+ xmlschema = etree.XMLSchema(xmlschema_doc)
+
+ xml_doc = etree.parse(filename)
+ if not xmlschema.validate(xml_doc):
+ error_str = str(xmlschema.error_log.filter_from_errors()[0])
+ raise self.error(
+ "Schema validation failed for: " + filename + "\n" + error_str)
+ except ImportError as e:
+ print("lxml not found, skipping validation", file=sys.stderr)
+
+ def do_parse(self, filename):
+ filepath = os.path.abspath(filename)
+ if filepath in self.xml_files:
+ return
+ self.xml_files.append(filepath)
+ file = open(filename, "rb")
+ parser = xml.parsers.expat.ParserCreate()
+ self.stack.append((parser, filename))
+ parser.StartElementHandler = self.start_element
+ parser.EndElementHandler = self.end_element
+ parser.CharacterDataHandler = self.character_data
+ parser.buffer_text = True
+ parser.ParseFile(file)
+ self.stack.pop()
+ file.close()
+
+ def parse(self, rnn_path, filename, validate):
+ self.path = rnn_path
+ self.stack = []
+ self.validate = validate
+ self.do_parse(filename)
+
+ def parse_reg(self, attrs, bit_size):
+ self.current_bitsize = bit_size
+ if "type" in attrs and attrs["type"] in self.bitsets:
+ bitset = self.bitsets[attrs["type"]]
+ if bitset.inline:
+ self.current_bitset = Bitset(attrs["name"], bitset)
+ self.current_bitset.inline = True
+ else:
+ self.current_bitset = bitset
+ else:
+ self.current_bitset = Bitset(attrs["name"], None)
+ self.current_bitset.inline = True
+ if "type" in attrs:
+ self.parse_field(None, attrs)
+
+ variant = self.parse_variants(attrs)
+ if not variant and self.current_array:
+ variant = self.current_array.variant
+
+ self.current_reg = Reg(attrs, self.prefix(variant),
+ self.current_array, bit_size)
+ self.current_reg.bitset = self.current_bitset
+ self.current_bitset.reg = self.current_reg
+
+ if len(self.stack) == 1:
+ self.file.append(self.current_reg)
+
+ if variant is not None:
+ self.add_all_variants(self.current_reg, attrs, variant)
+
+ usages = None
+ if "usage" in attrs:
+ usages = attrs["usage"].split(',')
+ elif self.current_array:
+ usages = self.current_array.usages
+
+ self.add_all_usages(self.current_reg, usages)
+
+ def start_element(self, name, attrs):
+ self.cdata = ""
+ if name == "import":
+ filename = attrs["file"]
+ self.do_parse(os.path.join(self.path, filename))
+ elif name == "domain":
+ self.current_domain = attrs["name"]
+ if "prefix" in attrs:
+ self.current_prefix = sanitize_variant(
+ self.parse_variants(attrs))
+ self.current_prefix_type = attrs["prefix"]
+ else:
+ self.current_prefix = None
+ self.current_prefix_type = None
+ if "varset" in attrs:
+ self.current_varset = self.enums[attrs["varset"]]
+ elif name == "stripe":
+ self.current_stripe = sanitize_variant(self.parse_variants(attrs))
+ elif name == "enum":
+ self.current_enum_value = 0
+ self.current_enum = Enum(attrs["name"])
+ self.enums[attrs["name"]] = self.current_enum
+ if len(self.stack) == 1:
+ self.file.append(self.current_enum)
+ elif name == "value":
+ if "value" in attrs:
+ value = int(attrs["value"], 0)
+ else:
+ value = self.current_enum_value
+ self.current_enum.values.append((attrs["name"], value))
+ elif name == "reg32":
+ self.parse_reg(attrs, 32)
+ elif name == "reg64":
+ self.parse_reg(attrs, 64)
+ elif name == "array":
+ self.current_bitsize = 32
+ variant = self.parse_variants(attrs)
+ index_type = self.enums[attrs["index"]] if "index" in attrs else None
+ self.current_array = Array(attrs, self.prefix(variant), variant,
+ self.current_array, index_type)
+ if len(self.stack) == 1:
+ self.file.append(self.current_array)
+ elif name == "bitset":
+ self.current_bitset = Bitset(attrs["name"], None)
+ if "inline" in attrs and attrs["inline"] == "yes":
+ self.current_bitset.inline = True
+ self.bitsets[self.current_bitset.name] = self.current_bitset
+ if len(self.stack) == 1 and not self.current_bitset.inline:
+ self.file.append(self.current_bitset)
+ elif name == "bitfield" and self.current_bitset:
+ self.parse_field(attrs["name"], attrs)
+ elif name == "database":
+ self.do_validate(attrs["xsi:schemaLocation"])
+
+ def end_element(self, name):
+ if name == "domain":
+ self.current_domain = None
+ self.current_prefix = None
+ self.current_prefix_type = None
+ elif name == "stripe":
+ self.current_stripe = None
+ elif name == "bitset":
+ self.current_bitset = None
+ elif name == "reg32":
+ self.current_reg = None
+ elif name == "array":
+ # if the array has no Reg children, push an implicit reg32:
+ if len(self.current_array.children) == 0:
+ attrs = {
+ "name": "REG",
+ "offset": "0",
+ }
+ self.parse_reg(attrs, 32)
+ self.current_array = self.current_array.parent
+ elif name == "enum":
+ self.current_enum = None
+
+ def character_data(self, data):
+ self.cdata += data
+
+ def dump_reg_usages(self):
+ d = collections.defaultdict(list)
+ for usage, regs in self.usage_regs.items():
+ for reg in regs:
+ variants = self.variant_regs.get(reg.name)
+ if variants:
+ for variant, vreg in variants.items():
+ if reg == vreg:
+ d[(usage, sanitize_variant(variant))].append(reg)
+ else:
+ for variant in self.variants:
+ d[(usage, sanitize_variant(variant))].append(reg)
+
+ print("#ifdef __cplusplus")
+
+ for usage, regs in self.usage_regs.items():
+ print("template<chip CHIP> constexpr inline uint16_t %s_REGS[] = {};" % (
+ usage.upper()))
+
+ for (usage, variant), regs in d.items():
+ offsets = []
+
+ for reg in regs:
+ if reg.array:
+ for i in range(reg.array.length):
+ offsets.append(reg.array.offset +
+ reg.offset + i * reg.array.stride)
+ if reg.bit_size == 64:
+ offsets.append(offsets[-1] + 1)
+ else:
+ offsets.append(reg.offset)
+ if reg.bit_size == 64:
+ offsets.append(offsets[-1] + 1)
+
+ offsets.sort()
+
+ print("template<> constexpr inline uint16_t %s_REGS<%s>[] = {" % (
+ usage.upper(), variant))
+ for offset in offsets:
+ print("\t%s," % hex(offset))
+ print("};")
+
+ print("#endif")
+
+ def has_variants(self, reg):
+ return reg.name in self.variant_regs and not is_number(reg.name) and not is_number(reg.name[1:])
+
+ def dump(self):
+ enums = []
+ bitsets = []
+ regs = []
+ for e in self.file:
+ if isinstance(e, Enum):
+ enums.append(e)
+ elif isinstance(e, Bitset):
+ bitsets.append(e)
+ else:
+ regs.append(e)
+
+ for e in enums + bitsets + regs:
+ e.dump(self.has_variants(e))
+
+ self.dump_reg_usages()
+
+ def dump_regs_py(self):
+ regs = []
+ for e in self.file:
+ if isinstance(e, Reg):
+ regs.append(e)
+
+ for e in regs:
+ e.dump_py()
+
+ def dump_reg_variants(self, regname, variants):
+ if is_number(regname) or is_number(regname[1:]):
+ return
+ print("#ifdef __cplusplus")
+ print("struct __%s {" % regname)
+ # TODO be more clever.. we should probably figure out which
+ # fields have the same type in all variants in which they
+ # appear, and stuff everything else in a variant-specific
+ # sub-structure.
+ seen_fields = []
+ bit_size = 32
+ array = False
+ address = None
+ constexpr_mark = " CONSTEXPR"
+ for variant in variants.keys():
+ print(" /* %s fields: */" % variant)
+ reg = variants[variant]
+ bit_size = reg.bit_size
+ array = reg.array
+ for f in reg.bitset.fields:
+ fld_name = field_name(reg, f)
+ if fld_name in seen_fields:
+ continue
+ seen_fields.append(fld_name)
+ name = fld_name.lower()
+ if f.type in ["address", "waddress"]:
+ if address:
+ continue
+ address = f
+ print("#ifndef TU_CS_H")
+ tab_to(" __bo_type", "bo;")
+ tab_to(" uint32_t", "bo_offset;")
+ print("#endif")
+ continue
+ type, val = f.ctype("var")
+ tab_to(" %s" % type, "%s;" % name)
+ if f.type == "float":
+ constexpr_mark = ""
+ print(" /* fallback fields: */")
+ if bit_size == 64:
+ tab_to(" uint64_t", "unknown;")
+ tab_to(" uint64_t", "qword;")
+ else:
+ tab_to(" uint32_t", "unknown;")
+ tab_to(" uint32_t", "dword;")
+ print("};")
+ # TODO don't hardcode the varset enum name
+ varenum = "chip"
+ print("template <%s %s>" % (varenum, varenum.upper()))
+ print("static%s inline struct fd_reg_pair" % (constexpr_mark))
+ xtra = ""
+ xtravar = ""
+ if array:
+ xtra = "int __i, "
+ xtravar = "__i, "
+ print("__%s(%sstruct __%s fields) {" % (regname, xtra, regname))
+ for variant in variants.keys():
+ if "-" in variant:
+ start = variant[:variant.index("-")]
+ end = variant[variant.index("-") + 1:]
+ if end != "":
+ print(" if ((%s >= %s) && (%s <= %s)) {" % (
+ varenum.upper(), start, varenum.upper(), end))
+ else:
+ print(" if (%s >= %s) {" % (varenum.upper(), start))
+ else:
+ print(" if (%s == %s) {" % (varenum.upper(), variant))
+ reg = variants[variant]
+ reg.dump_regpair_builder()
+ print(" } else")
+ print(" assert(!\"invalid variant\");")
+ print(" return (struct fd_reg_pair){};")
+ print("}")
+
+ if bit_size == 64:
+ skip = ", { .reg = 0 }"
+ else:
+ skip = ""
+
+ print("#define %s(VARIANT, %s...) __%s<VARIANT>(%s{__VA_ARGS__})%s" % (
+ regname, xtravar, regname, xtravar, skip))
+ print("#endif /* __cplusplus */")
+
+ def dump_structs(self):
+ for e in self.file:
+ e.dump_pack_struct(self.has_variants(e))
+
+ for regname in self.variant_regs:
+ self.dump_reg_variants(regname, self.variant_regs[regname])
def dump_c(args, guard, func):
- p = Parser()
-
- try:
- p.parse(args.rnn, args.xml, args.validate)
- except Error as e:
- print(e, file=sys.stderr)
- exit(1)
-
- print("#ifndef %s\n#define %s\n" % (guard, guard))
-
- print("/* Autogenerated file, DO NOT EDIT manually! */")
-
- print()
- print("#ifdef __KERNEL__")
- print("#include <linux/bug.h>")
- print("#define assert(x) BUG_ON(!(x))")
- print("#else")
- print("#include <assert.h>")
- print("#endif")
- print()
-
- print("#ifdef __cplusplus")
- print("#define __struct_cast(X)")
- print("#else")
- print("#define __struct_cast(X) (struct X)")
- print("#endif")
- print()
-
- print("#ifndef FD_NO_DEPRECATED_PACK")
- print("#define FD_DEPRECATED __attribute__((deprecated))")
- print("#else")
- print("#define FD_DEPRECATED")
- print("#endif")
- print()
-
- func(p)
-
- print()
- print("#undef FD_DEPRECATED")
- print()
-
- print("#endif /* %s */" % guard)
+ p = Parser()
+
+ try:
+ p.parse(args.rnn, args.xml, args.validate)
+ except Error as e:
+ print(e, file=sys.stderr)
+ exit(1)
+
+ print("#ifndef %s\n#define %s\n" % (guard, guard))
+
+ print("/* Autogenerated file, DO NOT EDIT manually! */")
+
+ print()
+ print("#ifdef __KERNEL__")
+ print("#include <linux/bug.h>")
+ print("#define assert(x) BUG_ON(!(x))")
+ print("#else")
+ print("#include <assert.h>")
+ print("#endif")
+ print()
+
+ print("#ifdef __cplusplus")
+ print("#define __struct_cast(X)")
+ print("#define CONSTEXPR constexpr")
+ print("#else")
+ print("#define __struct_cast(X) (struct X)")
+ print("#define CONSTEXPR")
+ print("#endif")
+ print()
+
+ # TODO figure out what to do about fd_reg_stomp_allowed()
+ # vs gcc.. for now only enable the warnings with clang:
+ print("#if defined(__clang__) && !defined(FD_NO_DEPRECATED_PACK)")
+ print("#define __FD_DEPRECATED _Pragma (\"GCC warning \\\"Deprecated reg builder\\\"\")")
+ print("#else")
+ print("#define __FD_DEPRECATED")
+ print("#endif")
+ print()
+
+ func(p)
+
+ print("#endif /* %s */" % guard)
def dump_c_defines(args):
- guard = str.replace(os.path.basename(args.xml), '.', '_').upper()
- dump_c(args, guard, lambda p: p.dump())
+ guard = str.replace(os.path.basename(args.xml), '.', '_').upper()
+ dump_c(args, guard, lambda p: p.dump())
def dump_c_pack_structs(args):
- guard = str.replace(os.path.basename(args.xml), '.', '_').upper() + '_STRUCTS'
- dump_c(args, guard, lambda p: p.dump_structs())
-
+ guard = str.replace(os.path.basename(args.xml),
+ '.', '_').upper() + '_STRUCTS'
+ dump_c(args, guard, lambda p: p.dump_structs())
+
+
+def dump_perfcntrs(args):
+ p = Parser()
+
+ try:
+ p.parse(args.rnn, args.xml, args.validate)
+ except Error as e:
+ print(e, file=sys.stderr)
+ exit(1)
+
+ perfcntrs = json.load(open(args.json, "r", encoding="utf-8"))
+
+ chip_type = p.enums['chip']
+ chip = perfcntrs['chip']
+ if not chip_type.has_name(chip):
+ raise Error("Invalid chip: " + chip)
+
+ groups = perfcntrs['groups']
+
+ guard = "__" + chip + "_PERFCNTRS_"
+ print("#ifndef %s\n#define %s\n" % (guard, guard))
+ print("/* Autogenerated file, DO NOT EDIT manually! */")
+ print()
+ print("#ifdef __KERNEL__")
+ print("#include \"msm_perfcntr.h\"")
+ print("#endif")
+ print()
+
+ def has_variant(variant):
+ if variant is None:
+ return True
+ if "-" in variant:
+ start = chip_type.value(variant[:variant.index("-")])
+ end = chip_type.value(variant[variant.index("-") + 1:])
+ chipn = chip_type.value(chip)
+
+ return (start is None or chipn >= start) and (end is None or chipn <= end)
+ return chip == variant
+
+ # Split out arrays and regs for later access:
+ arrays = {}
+ regs = {}
+ for e in p.file:
+ if isinstance(e, Array) and has_variant(e.variant):
+ arrays[e.local_name] = e
+ if isinstance(e, Reg):
+ regs[e.name] = e
+
+ # For variant regs, overwrite 'regs' entries with correct variant:
+ for regname in p.variant_regs:
+ for (variant, reg) in p.variant_regs[regname].items():
+ if has_variant(variant):
+ regs[regname] = reg
+ break
+
+ for group in groups:
+ name = group['name']
+ name_low = name.lower()
+ num = group['num']
+ countable_type_name = group['countable_type']
+
+ if countable_type_name not in p.enums:
+ raise Error("Invalid type: " + countable_type_name)
+
+ countable_type = p.enums[countable_type_name]
+
+ print("#ifndef __KERNEL__")
+ print("static const struct fd_perfcntr_countable " + name_low + "_countables[] = {")
+ for (name, value) in countable_type.values:
+ # if the countable is prefixed with the chip name, strip it
+ # (note: open-coded to avoid a py3.9 dependency for the kernel)
+ if name.startswith(chip + "_"):
+ name = name[len(chip)+1:]
+ print(" { \"" + name + "\", " + str(value) + " },")
+ print("};")
+ print("#endif")
+
+ print("static const struct fd_perfcntr_counter " + name_low + "_counters[] = {")
+ for i in range(0, num):
+ if "reserved" in group and i in group["reserved"]:
+ continue
+ def get_reg(name):
+ # if reg has {} pattern, expand that first:
+ name = name.format(i)
+
+ if name in arrays:
+ arr = arrays[name]
+ return arr.offset + (i * arr.stride)
+
+ if name not in regs:
+ raise Error("Invalid reg: " + name)
+
+ reg = regs[name]
+ return reg.offset
+
+ def get_counter():
+ # If the counter is a <reg64>, a single "counter" value
+ # should be specified in the json; for legacy separate
+ # hi/lo <reg32> pairs, "counter_lo" and "counter_hi"
+ # should be specified.
+ if "counter" in group:
+ counter = get_reg(group["counter"])
+ return [counter, counter+1]
+ counter_lo = get_reg(group["counter_lo"])
+ counter_hi = get_reg(group["counter_hi"])
+ return [counter_lo, counter_hi]
+
+ (counter_lo, counter_hi) = get_counter()
+ select = get_reg(group['select'])
+
+ select_offset = 0
+ if "select_offset" in group:
+ select_offset = int(group["select_offset"])
+ select = select + select_offset
+
+ slice_select_str = ""
+ if "slice_select" in group:
+ slice_select = group["slice_select"]
+ for reg in slice_select:
+ val = get_reg(reg) + select_offset
+ slice_select_str += "0x%04x, " % val
+
+ # TODO add support for things that need enable/clear regs
+
+ print(" { 0x%04x, {%s}, 0x%04x, 0x%04x }," % (select, slice_select_str, counter_lo, counter_hi))
+ print("};")
+
+ print()
+
+ print("const struct fd_perfcntr_group " + chip.lower() + "_perfcntr_groups[] = {")
+ for group in groups:
+ name = group['name']
+ name_low = name.lower()
+ pipe = 'NONE'
+ if 'pipe' in group:
+ pipe = group['pipe']
+
+ print(" GROUP(\"%s\", PIPE_%s, %s_counters, %s_countables)," % (name, pipe, name_low, name_low))
+
+ print("};")
+ print("const unsigned " + chip.lower() + "_num_perfcntr_groups = ARRAY_SIZE(" + chip.lower() + "_perfcntr_groups);")
+
+ print()
+ print("#endif /* %s */" % guard)
def dump_py_defines(args):
- p = Parser()
+ p = Parser()
- try:
- p.parse(args.rnn, args.xml, args.validate)
- except Error as e:
- print(e, file=sys.stderr)
- exit(1)
+ try:
+ p.parse(args.rnn, args.xml, args.validate)
+ except Error as e:
+ print(e, file=sys.stderr)
+ exit(1)
- file_name = os.path.splitext(os.path.basename(args.xml))[0]
+ file_name = os.path.splitext(os.path.basename(args.xml))[0]
- print("from enum import IntEnum")
- print("class %sRegs(IntEnum):" % file_name.upper())
+ print("from enum import IntEnum")
+ print("class %sRegs(IntEnum):" % file_name.upper())
- os.path.basename(args.xml)
+ os.path.basename(args.xml)
- p.dump_regs_py()
+ p.dump_regs_py()
def main():
- parser = argparse.ArgumentParser()
- parser.add_argument('--rnn', type=str, required=True)
- parser.add_argument('--xml', type=str, required=True)
- parser.add_argument('--validate', default=False, action='store_true')
- parser.add_argument('--no-validate', dest='validate', action='store_false')
+ parser = argparse.ArgumentParser()
+ parser.add_argument('--rnn', type=str, required=True)
+ parser.add_argument('--xml', type=str, required=True)
+ parser.add_argument('--validate', default=False, action='store_true')
+ parser.add_argument('--no-validate', dest='validate', action='store_false')
+
+ subparsers = parser.add_subparsers()
+ subparsers.required = True
- subparsers = parser.add_subparsers()
- subparsers.required = True
+ parser_c_defines = subparsers.add_parser('c-defines')
+ parser_c_defines.set_defaults(func=dump_c_defines)
- parser_c_defines = subparsers.add_parser('c-defines')
- parser_c_defines.set_defaults(func=dump_c_defines)
+ parser_c_pack_structs = subparsers.add_parser('c-pack-structs')
+ parser_c_pack_structs.set_defaults(func=dump_c_pack_structs)
- parser_c_pack_structs = subparsers.add_parser('c-pack-structs')
- parser_c_pack_structs.set_defaults(func=dump_c_pack_structs)
+ parser_perfcntrs = subparsers.add_parser('perfcntrs')
+ parser_perfcntrs.add_argument('--json', type=str, required=True)
+ parser_perfcntrs.set_defaults(func=dump_perfcntrs)
- parser_py_defines = subparsers.add_parser('py-defines')
- parser_py_defines.set_defaults(func=dump_py_defines)
+ parser_py_defines = subparsers.add_parser('py-defines')
+ parser_py_defines.set_defaults(func=dump_py_defines)
- args = parser.parse_args()
- args.func(args)
+ args = parser.parse_args()
+ args.func(args)
if __name__ == '__main__':
- main()
+ main()
--
2.54.0
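
For readers unfamiliar with the json consumed by dump_perfcntrs() above, the select/counter offset math it performs can be sketched as follows. All register names, offsets and strides here are made up for illustration; the real values come from the rnn XML and the per-chip json:

```python
# Rough sketch of the offset computation dump_perfcntrs() performs when
# emitting each fd_perfcntr_counter entry.  Register names, offsets and
# strides are hypothetical.

def get_reg(name, i, arrays, regs):
    """Resolve a register name from the json to a dword offset.

    A "{}" pattern in the name is expanded with the counter index first;
    array registers resolve to base offset plus index times stride.
    """
    name = name.format(i)
    if name in arrays:
        offset, stride = arrays[name]
        return offset + i * stride
    return regs[name]

def counter_pair(group, i, arrays, regs):
    """Return (lo, hi) dword offsets for counter i of a group.

    A <reg64> counter is given as a single "counter" entry (the hi word
    lives at offset+1); legacy split counters use "counter_lo"/"counter_hi".
    """
    if "counter" in group:
        lo = get_reg(group["counter"], i, arrays, regs)
        return lo, lo + 1
    return (get_reg(group["counter_lo"], i, arrays, regs),
            get_reg(group["counter_hi"], i, arrays, regs))

# Hypothetical group with array-style <reg64> counter and select regs:
arrays = {
    "HYP_PERFCTR_VAL": (0x0400, 2),   # (base offset, stride)
    "HYP_PERFCTR_SEL": (0x08d0, 1),
}
group = {"counter": "HYP_PERFCTR_VAL", "select": "HYP_PERFCTR_SEL"}

lo, hi = counter_pair(group, 1, arrays, {})   # 0x0402, 0x0403
sel = get_reg(group["select"], 1, arrays, {}) # 0x08d1
```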
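
The "A2XX-A4XX" style variant-range matching that has_variant() performs in the patch can be sketched like this (the chip enum values are hypothetical; the real ones come from the 'chip' enum in the XML):

```python
# Sketch of variant-range matching: a variant is either None (matches
# everything), an exact chip name, or a "START-END" range where either
# end may be omitted.  Chip enum values below are made up.
CHIP_VALUES = {"A2XX": 2, "A3XX": 3, "A4XX": 4, "A5XX": 5, "A6XX": 6}

def has_variant(variant, chip):
    if variant is None:
        return True                       # unversioned regs match every chip
    if "-" in variant:
        start, end = variant.split("-", 1)
        chipn = CHIP_VALUES[chip]
        start_n = CHIP_VALUES.get(start)  # None for an open-ended range
        end_n = CHIP_VALUES.get(end)
        return ((start_n is None or chipn >= start_n) and
                (end_n is None or chipn <= end_n))
    return chip == variant
```

Open-ended ranges such as "A5XX-" match every chip from A5XX onwards, which is why the script treats a missing endpoint as unbounded rather than an error.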
Thread overview: 3+ messages
2026-05-06 17:10 [PATCH v3 00/16] drm/msm: Add PERFCNTR_CONFIG ioctl Rob Clark
2026-05-06 17:10 ` [PATCH v4 04/16] drm/msm/registers: Sync gen_header.py from mesa Rob Clark