* [PATCH v6 0/7] drm/msm: adreno: add support for DDR bandwidth scaling via GMU
@ 2024-12-17 14:51 Neil Armstrong
2024-12-17 14:51 ` [PATCH v6 1/7] drm/msm: adreno: add defines for gpu & gmu frequency table sizes Neil Armstrong
` (7 more replies)
0 siblings, 8 replies; 15+ messages in thread
From: Neil Armstrong @ 2024-12-17 14:51 UTC
To: Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, David Airlie, Simona Vetter,
Bjorn Andersson, Rob Herring, Krzysztof Kozlowski, Conor Dooley,
Akhil P Oommen
Cc: linux-arm-msm, dri-devel, freedreno, linux-kernel, devicetree,
Neil Armstrong
The Adreno GPU Management Unit (GMU) can also vote for DDR bandwidth
alongside the frequency and power domain levels, but by default we let
the OPP core scale the interconnect DDR path.
While scaling the interconnect path alone was sufficient so far, newer
GPUs like the A750 require specific vote parameters and bandwidth to
achieve full functionality.
In order to get the vote values to be used by the GPU Management
Unit (GMU), we need to parse all the possible OPP bandwidths and
create a vote value to be sent to the appropriate Bus Control
Modules (BCMs) declared in the GPU info struct.
The newly added dev_pm_opp_get_bw() is used for this.
The vote array will then be used to dynamically generate the GMU
bw_table sent during the GMU power-up.
Those entries will then be used by passing the appropriate
bandwidth level when voting for a GPU frequency.
This makes sure all resources are voted consistently for the
same OPP: whatever decision the GMU takes, all resource votes
stay synchronized.
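For reference, the overall flow added by this series can be sketched
as follows (a simplified call flow, error handling and details omitted):

  a6xx_gmu_pwrlevels_probe()
    -> a6xx_gmu_build_bw_table()        /* patch 2: distinct OPP bandwidths + "off" level */
    -> a6xx_gmu_rpmh_votes_init()
       -> a6xx_gmu_rpmh_bw_votes_init() /* patch 2: per-BCM IB votes from cmd-db data */
  a6xx_hfi_send_bw_table()
    -> a6xx_generate_bw_table()         /* patch 3: fill the HFI bw_table from those votes */
  a6xx_gmu_set_freq()                   /* patch 4: look up bw index, add AB vote on A750 */
    -> a6xx_hfi_set_freq(gmu, perf_index, bw_index)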
Depends on [1] to avoid crashing when getting OPP bandwidths.
[1] https://lore.kernel.org/all/20241203-topic-opp-fix-assert-index-check-v3-0-1d4f6f763138@linaro.org/
Ran full vulkan-cts-1.3.7.3-0-gd71a36db16d98313c431829432a136dbda692a08 with mesa 25.0.0+git3ecf2a0518 on:
- QRD8550
- QRD8650
- HDK8650
Any feedback is welcome.
Signed-off-by: Neil Armstrong <neil.armstrong@linaro.org>
---
Changes in v6:
- Account for A6xx in a6xx_gmu_rpmh_bw_votes_init():
- always vote the perfmode bit on a6xx
- only vote X & Y on A7xx
- Only AB vote starting from A750
- Cleanup a6xx_gmu_rpmh_bw_votes_init()
- drop useless tests
- add local const struct a6xx_bcm to avoid &info->bcms[bcm_index]
- remove the useless ULL suffix from the 1000 constant
- add an error if cmd_db_read_aux_data() returns count==0
- Link to v5: https://lore.kernel.org/r/20241211-topic-sm8x50-gpu-bw-vote-v5-0-6112f9f785ec@linaro.org
Changes in v5:
- Dropped bogus qcom,icc.h flags
- Properly calculate _wait_bitmask from votes
- Switch DT to qcom,bus-freq values from downstream
- Added review tags
- Link to v4: https://lore.kernel.org/r/20241205-topic-sm8x50-gpu-bw-vote-v4-0-9650d15dd435@linaro.org
Changes in v4:
- Collected review tags
- Dropped bcm_div() and switched to clamp() instead
- Dropped pre-calculation of AB votes
- Instead calculate a 25% floor vote in a6xx_gmu_set_freq() as recommended
- Use QCOM_ICC_TAG_ALWAYS in DT
- Made a740_generate_bw_table() generic, using defines to fill the table
- Link to v3: https://lore.kernel.org/r/20241128-topic-sm8x50-gpu-bw-vote-v3-0-81d60c10fb73@linaro.org
Changes in v3:
- I didn't take Dmitry's review tags since I significantly changed the patches
- Dropped applied OPP change
- Dropped QUIRK/FEATURE addition/rename in favor of checking the a6xx_info->bcms pointer
- Switch a6xx_info->bcms to a pointer, so it can be easy to share the table
- Generate AB votes in advance; the voting was wrong in v2, we need to quantize each bandwidth value
- Do not vote via GMU if there's only the OFF vote because DT doesn't have the right properties
- Added defines for the a6xx_gmu freqs tables to not have magic 16 and 4 values
- Renamed gpu_bw_votes to gpu_ib_votes to match the downstream naming
- Changed the parameters of a6xx_hfi_set_freq() to u32 to match the data type we pass
- Drop "request for maximum bus bandwidth usage" and merge it in previous changes
- Link to v2: https://lore.kernel.org/r/20241119-topic-sm8x50-gpu-bw-vote-v2-0-4deb87be2498@linaro.org
Changes in v2:
- opp: rename to dev_pm_opp_get_bw, fix commit message and kerneldoc
- remove quirks that are features and move them to a dedicated .features bitfield
- get icc bcm kerneldoc, and simplify/cleanup a6xx_gmu_rpmh_bw_votes_init()
- no more copies of data
- take calculations from icc-rpmh/bcm-voter
- move into a single cleaner function
- fix a6xx_gmu_set_freq() by not calling dev_pm_opp_set_opp() if !bw_index
- also vote for maximum bus bandwidth usage (AB)
- overall fix typos in commit messages
- Link to v1: https://lore.kernel.org/r/20241113-topic-sm8x50-gpu-bw-vote-v1-0-3b8d39737a9b@linaro.org
---
Neil Armstrong (7):
drm/msm: adreno: add defines for gpu & gmu frequency table sizes
drm/msm: adreno: add plumbing to generate bandwidth vote table for GMU
drm/msm: adreno: dynamically generate GMU bw table
drm/msm: adreno: find bandwidth index of OPP and set it along freq index
drm/msm: adreno: enable GMU bandwidth for A740 and A750
arm64: qcom: dts: sm8550: add interconnect and opp-peak-kBps for GPU
arm64: qcom: dts: sm8650: add interconnect and opp-peak-kBps for GPU
arch/arm64/boot/dts/qcom/sm8550.dtsi | 13 +++
arch/arm64/boot/dts/qcom/sm8650.dtsi | 15 +++
drivers/gpu/drm/msm/adreno/a6xx_catalog.c | 22 ++++
drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 186 +++++++++++++++++++++++++++++-
drivers/gpu/drm/msm/adreno/a6xx_gmu.h | 26 ++++-
drivers/gpu/drm/msm/adreno/a6xx_gpu.h | 1 +
drivers/gpu/drm/msm/adreno/a6xx_hfi.c | 54 ++++++++-
drivers/gpu/drm/msm/adreno/a6xx_hfi.h | 5 +
drivers/gpu/drm/msm/adreno/adreno_gpu.h | 5 +
9 files changed, 316 insertions(+), 11 deletions(-)
---
base-commit: 4176cf5c5651c33769de83bb61b0287f4ec7719f
change-id: 20241113-topic-sm8x50-gpu-bw-vote-f5e022fe7a47
Best regards,
--
Neil Armstrong <neil.armstrong@linaro.org>
* [PATCH v6 1/7] drm/msm: adreno: add defines for gpu & gmu frequency table sizes
2024-12-17 14:51 [PATCH v6 0/7] drm/msm: adreno: add support for DDR bandwidth scaling via GMU Neil Armstrong
@ 2024-12-17 14:51 ` Neil Armstrong
2024-12-17 14:51 ` [PATCH v6 2/7] drm/msm: adreno: add plumbing to generate bandwidth vote table for GMU Neil Armstrong
` (6 subsequent siblings)
7 siblings, 0 replies; 15+ messages in thread
From: Neil Armstrong @ 2024-12-17 14:51 UTC
To: Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, David Airlie, Simona Vetter,
Bjorn Andersson, Rob Herring, Krzysztof Kozlowski, Conor Dooley,
Akhil P Oommen
Cc: linux-arm-msm, dri-devel, freedreno, linux-kernel, devicetree,
Neil Armstrong
Even if the code uses ARRAY_SIZE() to fill those tables,
it's still best practice not to use magic values for
table sizes in structs.
Suggested-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Reviewed-by: Akhil P Oommen <quic_akhilpo@quicinc.com>
Signed-off-by: Neil Armstrong <neil.armstrong@linaro.org>
---
drivers/gpu/drm/msm/adreno/a6xx_gmu.h | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
index b4a79f88ccf45cfe651c86d2a9da39541c5772b3..88f18ea6a38a08b5b171709e5020010947a5d347 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
@@ -19,6 +19,9 @@ struct a6xx_gmu_bo {
u64 iova;
};
+#define GMU_MAX_GX_FREQS 16
+#define GMU_MAX_CX_FREQS 4
+
/*
* These define the different GMU wake up options - these define how both the
* CPU and the GMU bring up the hardware
@@ -79,12 +82,12 @@ struct a6xx_gmu {
int current_perf_index;
int nr_gpu_freqs;
- unsigned long gpu_freqs[16];
- u32 gx_arc_votes[16];
+ unsigned long gpu_freqs[GMU_MAX_GX_FREQS];
+ u32 gx_arc_votes[GMU_MAX_GX_FREQS];
int nr_gmu_freqs;
- unsigned long gmu_freqs[4];
- u32 cx_arc_votes[4];
+ unsigned long gmu_freqs[GMU_MAX_CX_FREQS];
+ u32 cx_arc_votes[GMU_MAX_CX_FREQS];
unsigned long freq;
--
2.34.1
* [PATCH v6 2/7] drm/msm: adreno: add plumbing to generate bandwidth vote table for GMU
2024-12-17 14:51 [PATCH v6 0/7] drm/msm: adreno: add support for DDR bandwidth scaling via GMU Neil Armstrong
2024-12-17 14:51 ` [PATCH v6 1/7] drm/msm: adreno: add defines for gpu & gmu frequency table sizes Neil Armstrong
@ 2024-12-17 14:51 ` Neil Armstrong
2024-12-23 14:54 ` Konrad Dybcio
2024-12-17 14:51 ` [PATCH v6 3/7] drm/msm: adreno: dynamically generate GMU bw table Neil Armstrong
` (5 subsequent siblings)
7 siblings, 1 reply; 15+ messages in thread
From: Neil Armstrong @ 2024-12-17 14:51 UTC
To: Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, David Airlie, Simona Vetter,
Bjorn Andersson, Rob Herring, Krzysztof Kozlowski, Conor Dooley,
Akhil P Oommen
Cc: linux-arm-msm, dri-devel, freedreno, linux-kernel, devicetree,
Neil Armstrong
The Adreno GPU Management Unit (GMU) can also scale DDR bandwidth
alongside the frequency and power domain levels, but by default we let
the OPP core scale the interconnect DDR path.
While scaling via the interconnect path was sufficient so far, newer
GPUs like the A750 require specific vote parameters and bandwidth to
achieve full functionality.
In order to calculate the vote values used by the GPU Management
Unit (GMU), we need to parse all the possible OPP bandwidths and
create a vote value to be sent to the appropriate Bus Control
Modules (BCMs) declared in the GPU info struct.
This vote value is called IB; the GMU also takes another vote,
called AB, which is a 16-bit quantized value of the floor bandwidth
against the maximum supported bandwidth.
The AB vote will be calculated later, when setting the frequency.
The vote array will then be used to dynamically generate the GMU
bw_table sent during the GMU power-up.
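Each entry of the vote array is a 32-bit TCS command word packed with
the BCM_TCS_CMD() helper from <soc/qcom/tcs.h>, roughly laid out as
below (a sketch for orientation, see the header for the exact macros):

  /*
   * bit 30    : commit - flush this vote together with the previous
   *             ones of the same virtual clock domain (VCD)
   * bit 29    : valid  - the x/y payload carries an actual vote
   * bits 27:14: vote_x - 14-bit vote, used on A7xx
   * bits 13:0 : vote_y - 14-bit vote
   */
  data[bcm_index] = BCM_TCS_CMD(commit, true, vote, vote);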
Reviewed-by: Akhil P Oommen <quic_akhilpo@quicinc.com>
Signed-off-by: Neil Armstrong <neil.armstrong@linaro.org>
---
drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 147 ++++++++++++++++++++++++++++++++++
drivers/gpu/drm/msm/adreno/a6xx_gmu.h | 13 +++
drivers/gpu/drm/msm/adreno/a6xx_gpu.h | 1 +
3 files changed, 161 insertions(+)
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
index 14db7376c712d19446b38152e480bd5a1e0a5198..b1dadafc35e95d6173019bda1105008dec1ac03a 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
@@ -9,6 +9,7 @@
#include <linux/pm_domain.h>
#include <linux/pm_opp.h>
#include <soc/qcom/cmd-db.h>
+#include <soc/qcom/tcs.h>
#include <drm/drm_gem.h>
#include "a6xx_gpu.h"
@@ -1287,6 +1288,104 @@ static int a6xx_gmu_memory_probe(struct a6xx_gmu *gmu)
return 0;
}
+/**
+ * struct bcm_db - Auxiliary data pertaining to each Bus Clock Manager (BCM)
+ * @unit: divisor used to convert bytes/sec bw value to an RPMh msg
+ * @width: multiplier used to convert bytes/sec bw value to an RPMh msg
+ * @vcd: virtual clock domain that this bcm belongs to
+ * @reserved: reserved field
+ */
+struct bcm_db {
+ __le32 unit;
+ __le16 width;
+ u8 vcd;
+ u8 reserved;
+};
+
+static int a6xx_gmu_rpmh_bw_votes_init(struct adreno_gpu *adreno_gpu,
+ const struct a6xx_info *info,
+ struct a6xx_gmu *gmu)
+{
+ const struct bcm_db *bcm_data[GMU_MAX_BCMS] = { 0 };
+ unsigned int bcm_index, bw_index, bcm_count = 0;
+
+ /* Retrieve BCM data from cmd-db */
+ for (bcm_index = 0; bcm_index < GMU_MAX_BCMS; bcm_index++) {
+ const struct a6xx_bcm *bcm = &info->bcms[bcm_index];
+ size_t count;
+
+ /* Stop at NULL terminated bcm entry */
+ if (!bcm->name)
+ break;
+
+ bcm_data[bcm_index] = cmd_db_read_aux_data(bcm->name, &count);
+ if (IS_ERR(bcm_data[bcm_index]))
+ return PTR_ERR(bcm_data[bcm_index]);
+
+ if (!count) {
+ dev_err(gmu->dev, "invalid BCM '%s' aux data size\n",
+ bcm->name);
+ return -EINVAL;
+ }
+
+ bcm_count++;
+ }
+
+ /* Generate BCM votes values for each bandwidth & BCM */
+ for (bw_index = 0; bw_index < gmu->nr_gpu_bws; bw_index++) {
+ u32 *data = gmu->gpu_ib_votes[bw_index];
+ u32 bw = gmu->gpu_bw_table[bw_index];
+
+ /* Calculations loosely copied from bcm_aggregate() & tcs_cmd_gen() */
+ for (bcm_index = 0; bcm_index < bcm_count; bcm_index++) {
+ const struct a6xx_bcm *bcm = &info->bcms[bcm_index];
+ bool commit = false;
+ u64 peak;
+ u32 vote;
+
+ if (bcm_index == bcm_count - 1 ||
+ (bcm_data[bcm_index + 1] &&
+ bcm_data[bcm_index]->vcd != bcm_data[bcm_index + 1]->vcd))
+ commit = true;
+
+ if (!bw) {
+ data[bcm_index] = BCM_TCS_CMD(commit, false, 0, 0);
+ continue;
+ }
+
+ if (bcm->fixed) {
+ u32 perfmode = 0;
+
+ /* GMU on A6xx votes perfmode on all valid bandwidth */
+ if (!adreno_is_a7xx(adreno_gpu) ||
+ (bcm->perfmode_bw && bw >= bcm->perfmode_bw))
+ perfmode = bcm->perfmode;
+
+ data[bcm_index] = BCM_TCS_CMD(commit, true, 0, perfmode);
+ continue;
+ }
+
+ /* Multiply the bandwidth by the width of the connection */
+ peak = (u64)bw * le16_to_cpu(bcm_data[bcm_index]->width);
+ do_div(peak, bcm->buswidth);
+
+ /* Input bandwidth value is in KBps, scale the value to BCM unit */
+ peak *= 1000;
+ do_div(peak, le32_to_cpu(bcm_data[bcm_index]->unit));
+
+ vote = clamp(peak, 1, BCM_TCS_CMD_VOTE_MASK);
+
+ /* GMUs on A7xx votes on both x & y */
+ if (adreno_is_a7xx(adreno_gpu))
+ data[bcm_index] = BCM_TCS_CMD(commit, true, vote, vote);
+ else
+ data[bcm_index] = BCM_TCS_CMD(commit, true, 0, vote);
+ }
+ }
+
+ return 0;
+}
+
/* Return the 'arc-level' for the given frequency */
static unsigned int a6xx_gmu_get_arc_level(struct device *dev,
unsigned long freq)
@@ -1390,12 +1489,15 @@ static int a6xx_gmu_rpmh_arc_votes_init(struct device *dev, u32 *votes,
* The GMU votes with the RPMh for itself and on behalf of the GPU but we need
* to construct the list of votes on the CPU and send it over. Query the RPMh
* voltage levels and build the votes
+ * The GMU can also vote for DDR interconnects, use the OPP bandwidth entries
+ * and BCM parameters to build the votes.
*/
static int a6xx_gmu_rpmh_votes_init(struct a6xx_gmu *gmu)
{
struct a6xx_gpu *a6xx_gpu = container_of(gmu, struct a6xx_gpu, gmu);
struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
+ const struct a6xx_info *info = adreno_gpu->info->a6xx;
struct msm_gpu *gpu = &adreno_gpu->base;
int ret;
@@ -1407,6 +1509,10 @@ static int a6xx_gmu_rpmh_votes_init(struct a6xx_gmu *gmu)
ret |= a6xx_gmu_rpmh_arc_votes_init(gmu->dev, gmu->cx_arc_votes,
gmu->gmu_freqs, gmu->nr_gmu_freqs, "cx.lvl");
+ /* Build the interconnect votes */
+ if (info->bcms && gmu->nr_gpu_bws > 1)
+ ret |= a6xx_gmu_rpmh_bw_votes_init(adreno_gpu, info, gmu);
+
return ret;
}
@@ -1442,10 +1548,43 @@ static int a6xx_gmu_build_freq_table(struct device *dev, unsigned long *freqs,
return index;
}
+static int a6xx_gmu_build_bw_table(struct device *dev, unsigned long *bandwidths,
+ u32 size)
+{
+ int count = dev_pm_opp_get_opp_count(dev);
+ struct dev_pm_opp *opp;
+ int i, index = 0;
+ unsigned int bandwidth = 1;
+
+ /*
+ * The OPP table doesn't contain the "off" bandwidth level so we need to
+ * add 1 to the table size to account for it
+ */
+
+ if (WARN(count + 1 > size,
+ "The GMU bandwidth table is being truncated\n"))
+ count = size - 1;
+
+ /* Set the "off" bandwidth */
+ bandwidths[index++] = 0;
+
+ for (i = 0; i < count; i++) {
+ opp = dev_pm_opp_find_bw_ceil(dev, &bandwidth, 0);
+ if (IS_ERR(opp))
+ break;
+
+ dev_pm_opp_put(opp);
+ bandwidths[index++] = bandwidth++;
+ }
+
+ return index;
+}
+
static int a6xx_gmu_pwrlevels_probe(struct a6xx_gmu *gmu)
{
struct a6xx_gpu *a6xx_gpu = container_of(gmu, struct a6xx_gpu, gmu);
struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
+ const struct a6xx_info *info = adreno_gpu->info->a6xx;
struct msm_gpu *gpu = &adreno_gpu->base;
int ret = 0;
@@ -1472,6 +1611,14 @@ static int a6xx_gmu_pwrlevels_probe(struct a6xx_gmu *gmu)
gmu->current_perf_index = gmu->nr_gpu_freqs - 1;
+ /*
+ * The GMU also handles GPU Interconnect Votes so build a list
+ * of DDR bandwidths from the GPU OPP table
+ */
+ if (info->bcms)
+ gmu->nr_gpu_bws = a6xx_gmu_build_bw_table(&gpu->pdev->dev,
+ gmu->gpu_bw_table, ARRAY_SIZE(gmu->gpu_bw_table));
+
/* Build the list of RPMh votes that we'll send to the GMU */
return a6xx_gmu_rpmh_votes_init(gmu);
}
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
index 88f18ea6a38a08b5b171709e5020010947a5d347..2062a2be224768c1937d7768f7b8439920e9e127 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
@@ -21,6 +21,15 @@ struct a6xx_gmu_bo {
#define GMU_MAX_GX_FREQS 16
#define GMU_MAX_CX_FREQS 4
+#define GMU_MAX_BCMS 3
+
+struct a6xx_bcm {
+ char *name;
+ unsigned int buswidth;
+ bool fixed;
+ unsigned int perfmode;
+ unsigned int perfmode_bw;
+};
/*
* These define the different GMU wake up options - these define how both the
@@ -85,6 +94,10 @@ struct a6xx_gmu {
unsigned long gpu_freqs[GMU_MAX_GX_FREQS];
u32 gx_arc_votes[GMU_MAX_GX_FREQS];
+ int nr_gpu_bws;
+ unsigned long gpu_bw_table[GMU_MAX_GX_FREQS];
+ u32 gpu_ib_votes[GMU_MAX_GX_FREQS][GMU_MAX_BCMS];
+
int nr_gmu_freqs;
unsigned long gmu_freqs[GMU_MAX_CX_FREQS];
u32 cx_arc_votes[GMU_MAX_CX_FREQS];
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
index 4aceffb6aae89c781facc2a6e4a82b20b341b6cb..9201a53dd341bf432923ffb44947e015208a3d02 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
@@ -44,6 +44,7 @@ struct a6xx_info {
u32 gmu_chipid;
u32 gmu_cgc_mode;
u32 prim_fifo_threshold;
+ const struct a6xx_bcm *bcms;
};
struct a6xx_gpu {
--
2.34.1
* [PATCH v6 3/7] drm/msm: adreno: dynamically generate GMU bw table
2024-12-17 14:51 [PATCH v6 0/7] drm/msm: adreno: add support for DDR bandwidth scaling via GMU Neil Armstrong
2024-12-17 14:51 ` [PATCH v6 1/7] drm/msm: adreno: add defines for gpu & gmu frequency table sizes Neil Armstrong
2024-12-17 14:51 ` [PATCH v6 2/7] drm/msm: adreno: add plumbing to generate bandwidth vote table for GMU Neil Armstrong
@ 2024-12-17 14:51 ` Neil Armstrong
2024-12-17 17:52 ` Akhil P Oommen
2024-12-23 15:02 ` Konrad Dybcio
2024-12-17 14:51 ` [PATCH v6 4/7] drm/msm: adreno: find bandwidth index of OPP and set it along freq index Neil Armstrong
` (4 subsequent siblings)
7 siblings, 2 replies; 15+ messages in thread
From: Neil Armstrong @ 2024-12-17 14:51 UTC
To: Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, David Airlie, Simona Vetter,
Bjorn Andersson, Rob Herring, Krzysztof Kozlowski, Conor Dooley,
Akhil P Oommen
Cc: linux-arm-msm, dri-devel, freedreno, linux-kernel, devicetree,
Neil Armstrong
The Adreno GPU Management Unit (GMU) can also scale the DDR
bandwidth alongside the frequency and power domain levels, but for
now we statically fill the bw_table with values from the
downstream driver.
Only the first entry, which is a disable vote, is used, so we
currently rely on scaling via the Linux interconnect paths.
Let's dynamically generate the bw_table with the vote values
previously calculated from the OPPs.
Those entries will then be used by the GMU when passing the
appropriate bandwidth level while voting for a GPU frequency.
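As an illustration, on SM8550 (patch 6) the generated table ends up
with six bandwidth levels: level 0 is the disable vote and levels 1-5
are the distinct opp-peak-kBps values, each level carrying one vote
word per BCM declared in patch 5. A rough sketch of the result (the
exact vote words depend on the cmd-db data of the running board):

  msg->ddr_cmds_num = 3;       /* SH0, MC0, ACV */
  msg->bw_level_num = 6;       /* "off" + 5 distinct bandwidths */
  /* level 0: disable vote, commit bit but no valid payload */
  msg->ddr_cmds_data[0][0] = BCM_TCS_CMD(true, false, 0, 0);
  /* levels 1..5: the IB votes computed in the previous patch */
  msg->ddr_wait_bitmask = 0x7; /* assuming each BCM sits in its own VCD */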
Signed-off-by: Neil Armstrong <neil.armstrong@linaro.org>
---
drivers/gpu/drm/msm/adreno/a6xx_hfi.c | 48 ++++++++++++++++++++++++++++++++++-
1 file changed, 47 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_hfi.c b/drivers/gpu/drm/msm/adreno/a6xx_hfi.c
index cb8844ed46b29c4569d05eb7a24f7b27e173190f..995526620d678cd05020315f771213e4a6943bec 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_hfi.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_hfi.c
@@ -6,6 +6,7 @@
#include <linux/list.h>
#include <soc/qcom/cmd-db.h>
+#include <soc/qcom/tcs.h>
#include "a6xx_gmu.h"
#include "a6xx_gmu.xml.h"
@@ -259,6 +260,48 @@ static int a6xx_hfi_send_perf_table(struct a6xx_gmu *gmu)
NULL, 0);
}
+static void a6xx_generate_bw_table(const struct a6xx_info *info, struct a6xx_gmu *gmu,
+ struct a6xx_hfi_msg_bw_table *msg)
+{
+ unsigned int i, j;
+
+ for (i = 0; i < GMU_MAX_BCMS; i++) {
+ if (!info->bcms[i].name)
+ break;
+ msg->ddr_cmds_addrs[i] = cmd_db_read_addr(info->bcms[i].name);
+ }
+ msg->ddr_cmds_num = i;
+
+ for (i = 0; i < gmu->nr_gpu_bws; ++i)
+ for (j = 0; j < msg->ddr_cmds_num; j++)
+ msg->ddr_cmds_data[i][j] = gmu->gpu_ib_votes[i][j];
+ msg->bw_level_num = gmu->nr_gpu_bws;
+
+ /* Compute the wait bitmask with each BCM having the commit bit */
+ msg->ddr_wait_bitmask = 0;
+ for (j = 0; j < msg->ddr_cmds_num; j++)
+ if (msg->ddr_cmds_data[0][j] & BCM_TCS_CMD_COMMIT_MASK)
+ msg->ddr_wait_bitmask |= BIT(j);
+
+ /*
+ * These are the CX (CNOC) votes - these are used by the GMU
+ * The 'CN0' BCM is used on all targets, and votes are basically
+ * 'off' and 'on' states with first bit to enable the path.
+ */
+
+ msg->cnoc_cmds_addrs[0] = cmd_db_read_addr("CN0");
+ msg->cnoc_cmds_num = 1;
+
+ msg->cnoc_cmds_data[0][0] = BCM_TCS_CMD(true, false, 0, 0);
+ msg->cnoc_cmds_data[1][0] = BCM_TCS_CMD(true, true, 0, BIT(0));
+
+ /* Compute the wait bitmask with each BCM having the commit bit */
+ msg->cnoc_wait_bitmask = 0;
+ for (j = 0; j < msg->cnoc_cmds_num; j++)
+ if (msg->cnoc_cmds_data[0][j] & BCM_TCS_CMD_COMMIT_MASK)
+ msg->cnoc_wait_bitmask |= BIT(j);
+}
+
static void a618_build_bw_table(struct a6xx_hfi_msg_bw_table *msg)
{
/* Send a single "off" entry since the 618 GMU doesn't do bus scaling */
@@ -664,6 +707,7 @@ static int a6xx_hfi_send_bw_table(struct a6xx_gmu *gmu)
struct a6xx_hfi_msg_bw_table *msg;
struct a6xx_gpu *a6xx_gpu = container_of(gmu, struct a6xx_gpu, gmu);
struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
+ const struct a6xx_info *info = adreno_gpu->info->a6xx;
if (gmu->bw_table)
goto send;
@@ -672,7 +716,9 @@ static int a6xx_hfi_send_bw_table(struct a6xx_gmu *gmu)
if (!msg)
return -ENOMEM;
- if (adreno_is_a618(adreno_gpu))
+ if (info->bcms && gmu->nr_gpu_bws > 1)
+ a6xx_generate_bw_table(info, gmu, msg);
+ else if (adreno_is_a618(adreno_gpu))
a618_build_bw_table(msg);
else if (adreno_is_a619(adreno_gpu))
a619_build_bw_table(msg);
--
2.34.1
* [PATCH v6 4/7] drm/msm: adreno: find bandwidth index of OPP and set it along freq index
2024-12-17 14:51 [PATCH v6 0/7] drm/msm: adreno: add support for DDR bandwidth scaling via GMU Neil Armstrong
` (2 preceding siblings ...)
2024-12-17 14:51 ` [PATCH v6 3/7] drm/msm: adreno: dynamically generate GMU bw table Neil Armstrong
@ 2024-12-17 14:51 ` Neil Armstrong
2024-12-23 14:50 ` Konrad Dybcio
2024-12-17 14:51 ` [PATCH v6 5/7] drm/msm: adreno: enable GMU bandwidth for A740 and A750 Neil Armstrong
` (3 subsequent siblings)
7 siblings, 1 reply; 15+ messages in thread
From: Neil Armstrong @ 2024-12-17 14:51 UTC
To: Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, David Airlie, Simona Vetter,
Bjorn Andersson, Rob Herring, Krzysztof Kozlowski, Conor Dooley,
Akhil P Oommen
Cc: linux-arm-msm, dri-devel, freedreno, linux-kernel, devicetree,
Neil Armstrong
The Adreno GPU Management Unit (GMU) can also scale the DDR bandwidth
alongside the frequency and power domain levels; until now we left the
OPP core to scale the OPP bandwidth via the interconnect path.
In order to enable bandwidth voting via the GPU Management
Unit (GMU), when an OPP is set by devfreq we also look up
the corresponding bandwidth index in the previously generated
bw_table and pass this value along with the frequency index to the GMU.
The GMU also takes another vote, called AB, which is a 16-bit
quantized value of the floor bandwidth against the maximum supported
bandwidth.
The AB is calculated as a default 25% of the bandwidth, like the
downstream implementation, to inform the GMU firmware of the minimum
bandwidth we require for this OPP. The AB vote is only passed
starting from A750 GPUs.
Since we now vote for all resources via the GMU, setting the OPP
is no longer needed, so we can completely skip calling
dev_pm_opp_set_opp() in this situation.
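As a worked example with the SM8550 numbers from patch 6 (purely
illustrative): for the 550 MHz OPP, opp-peak-kBps = 10687500 and the
maximum bandwidth is 16500000, so:

  tmp = 10687500 * 25 / 100;          /* 25% floor: 2671875 kBps */
  tmp = tmp * MAX_AB_VOTE / 16500000; /* quantized: ~10612 */
  bw_index |= AB_VOTE(clamp(tmp, 1, MAX_AB_VOTE)) | AB_VOTE_ENABLE;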
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Reviewed-by: Akhil P Oommen <quic_akhilpo@quicinc.com>
Signed-off-by: Neil Armstrong <neil.armstrong@linaro.org>
---
drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 39 +++++++++++++++++++++++++++++++--
drivers/gpu/drm/msm/adreno/a6xx_gmu.h | 2 +-
drivers/gpu/drm/msm/adreno/a6xx_hfi.c | 6 ++---
drivers/gpu/drm/msm/adreno/a6xx_hfi.h | 5 +++++
drivers/gpu/drm/msm/adreno/adreno_gpu.h | 5 +++++
5 files changed, 51 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
index b1dadafc35e95d6173019bda1105008dec1ac03a..9520fbcc89d85ee6dd4bdb17cef5f38dbf5afe6d 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
@@ -110,9 +110,11 @@ void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp,
bool suspended)
{
struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+ const struct a6xx_info *info = adreno_gpu->info->a6xx;
struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
struct a6xx_gmu *gmu = &a6xx_gpu->gmu;
u32 perf_index;
+ u32 bw_index = 0;
unsigned long gpu_freq;
int ret = 0;
@@ -125,6 +127,37 @@ void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp,
if (gpu_freq == gmu->gpu_freqs[perf_index])
break;
+ /* If enabled, find the corresponding DDR bandwidth index */
+ if (info->bcms && gmu->nr_gpu_bws > 1) {
+ unsigned int bw = dev_pm_opp_get_bw(opp, true, 0);
+
+ for (bw_index = 0; bw_index < gmu->nr_gpu_bws - 1; bw_index++) {
+ if (bw == gmu->gpu_bw_table[bw_index])
+ break;
+ }
+
+ /* Vote AB as a fraction of the max bandwidth, starting from A750 */
+ if (bw && adreno_is_a750_family(adreno_gpu)) {
+ u64 tmp;
+
+ /* For now, vote for 25% of the bandwidth */
+ tmp = bw * 25;
+ do_div(tmp, 100);
+
+ /*
+ * The AB vote consists of a 16 bit wide quantized level
+ * against the maximum supported bandwidth.
+ * Quantization can be calculated as below:
+ * vote = (bandwidth * 2^16) / max bandwidth
+ */
+ tmp *= MAX_AB_VOTE;
+ do_div(tmp, gmu->gpu_bw_table[gmu->nr_gpu_bws - 1]);
+
+ bw_index |= AB_VOTE(clamp(tmp, 1, MAX_AB_VOTE));
+ bw_index |= AB_VOTE_ENABLE;
+ }
+ }
+
gmu->current_perf_index = perf_index;
gmu->freq = gmu->gpu_freqs[perf_index];
@@ -140,8 +173,10 @@ void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp,
return;
if (!gmu->legacy) {
- a6xx_hfi_set_freq(gmu, perf_index);
- dev_pm_opp_set_opp(&gpu->pdev->dev, opp);
+ a6xx_hfi_set_freq(gmu, perf_index, bw_index);
+ /* With Bandwidth voting, we now vote for all resources, so skip OPP set */
+ if (!bw_index)
+ dev_pm_opp_set_opp(&gpu->pdev->dev, opp);
return;
}
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
index 2062a2be224768c1937d7768f7b8439920e9e127..0c888b326cfb485400118f3601fa5f1949b03374 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
@@ -209,7 +209,7 @@ void a6xx_hfi_init(struct a6xx_gmu *gmu);
int a6xx_hfi_start(struct a6xx_gmu *gmu, int boot_state);
void a6xx_hfi_stop(struct a6xx_gmu *gmu);
int a6xx_hfi_send_prep_slumber(struct a6xx_gmu *gmu);
-int a6xx_hfi_set_freq(struct a6xx_gmu *gmu, int index);
+int a6xx_hfi_set_freq(struct a6xx_gmu *gmu, u32 perf_index, u32 bw_index);
bool a6xx_gmu_gx_is_on(struct a6xx_gmu *gmu);
bool a6xx_gmu_sptprac_is_on(struct a6xx_gmu *gmu);
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_hfi.c b/drivers/gpu/drm/msm/adreno/a6xx_hfi.c
index 995526620d678cd05020315f771213e4a6943bec..0989aee3dd2cf9bc3405c3b25a595c22e6f06387 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_hfi.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_hfi.c
@@ -772,13 +772,13 @@ static int a6xx_hfi_send_core_fw_start(struct a6xx_gmu *gmu)
sizeof(msg), NULL, 0);
}
-int a6xx_hfi_set_freq(struct a6xx_gmu *gmu, int index)
+int a6xx_hfi_set_freq(struct a6xx_gmu *gmu, u32 freq_index, u32 bw_index)
{
struct a6xx_hfi_gx_bw_perf_vote_cmd msg = { 0 };
msg.ack_type = 1; /* blocking */
- msg.freq = index;
- msg.bw = 0; /* TODO: bus scaling */
+ msg.freq = freq_index;
+ msg.bw = bw_index;
return a6xx_hfi_send_msg(gmu, HFI_H2F_MSG_GX_BW_PERF_VOTE, &msg,
sizeof(msg), NULL, 0);
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_hfi.h b/drivers/gpu/drm/msm/adreno/a6xx_hfi.h
index 528110169398f69f16443a29a1594d19c36fb595..52ba4a07d7b9a709289acd244a751ace9bdaab5d 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_hfi.h
+++ b/drivers/gpu/drm/msm/adreno/a6xx_hfi.h
@@ -173,6 +173,11 @@ struct a6xx_hfi_gx_bw_perf_vote_cmd {
u32 bw;
};
+#define AB_VOTE_MASK GENMASK(31, 16)
+#define MAX_AB_VOTE (FIELD_MAX(AB_VOTE_MASK) - 1)
+#define AB_VOTE(vote) FIELD_PREP(AB_VOTE_MASK, (vote))
+#define AB_VOTE_ENABLE BIT(8)
+
#define HFI_H2F_MSG_PREPARE_SLUMBER 33
struct a6xx_hfi_prep_slumber_cmd {
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
index e71f420f8b3a8e6cfc52dd1c4d5a63ef3704a07f..f5d6087376f52c93648e136449cfd4f703ecfb7f 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
@@ -559,6 +559,11 @@ static inline int adreno_is_a740_family(struct adreno_gpu *gpu)
gpu->info->family == ADRENO_7XX_GEN3;
}
+static inline int adreno_is_a750_family(struct adreno_gpu *gpu)
+{
+ return gpu->info->family == ADRENO_7XX_GEN3;
+}
+
static inline int adreno_is_a7xx(struct adreno_gpu *gpu)
{
/* Update with non-fake (i.e. non-A702) Gen 7 GPUs */
--
2.34.1
* [PATCH v6 5/7] drm/msm: adreno: enable GMU bandwidth for A740 and A750
2024-12-17 14:51 [PATCH v6 0/7] drm/msm: adreno: add support for DDR bandwidth scaling via GMU Neil Armstrong
` (3 preceding siblings ...)
2024-12-17 14:51 ` [PATCH v6 4/7] drm/msm: adreno: find bandwidth index of OPP and set it along freq index Neil Armstrong
@ 2024-12-17 14:51 ` Neil Armstrong
2024-12-17 14:51 ` [PATCH v6 6/7] arm64: qcom: dts: sm8550: add interconnect and opp-peak-kBps for GPU Neil Armstrong
` (2 subsequent siblings)
7 siblings, 0 replies; 15+ messages in thread
From: Neil Armstrong @ 2024-12-17 14:51 UTC
To: Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, David Airlie, Simona Vetter,
Bjorn Andersson, Rob Herring, Krzysztof Kozlowski, Conor Dooley,
Akhil P Oommen
Cc: linux-arm-msm, dri-devel, freedreno, linux-kernel, devicetree,
Neil Armstrong
Now that all the DDR bandwidth voting via the GPU Management
Unit (GMU) is in place, declare the Bus Control Modules (BCMs) and the
corresponding parameters in the GPU info struct.
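SH0 and MC0 get scaled IB votes, while ACV is a fixed (enable-style)
BCM: per the logic added in patch 2, it only votes its perfmode bit
once the requested bandwidth reaches perfmode_bw. Taking the first
entry below (perfmode = BIT(3), perfmode_bw = 16500000) as an example:

  /* fixed BCM: no scaled vote, just the perfmode bit above the threshold */
  if (bcm->perfmode_bw && bw >= bcm->perfmode_bw)
      perfmode = bcm->perfmode; /* only reached by the top 16500000 kBps OPP */
  data[bcm_index] = BCM_TCS_CMD(commit, true, 0, perfmode);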
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Reviewed-by: Akhil P Oommen <quic_akhilpo@quicinc.com>
Signed-off-by: Neil Armstrong <neil.armstrong@linaro.org>
---
drivers/gpu/drm/msm/adreno/a6xx_catalog.c | 22 ++++++++++++++++++++++
1 file changed, 22 insertions(+)
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_catalog.c b/drivers/gpu/drm/msm/adreno/a6xx_catalog.c
index 0c560e84ad5a53bb4e8a49ba4e153ce9cf33f7ae..edffb7737a97b268bb2986d557969e651988a344 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_catalog.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_catalog.c
@@ -1388,6 +1388,17 @@ static const struct adreno_info a7xx_gpus[] = {
.pwrup_reglist = &a7xx_pwrup_reglist,
.gmu_chipid = 0x7020100,
.gmu_cgc_mode = 0x00020202,
+ .bcms = (const struct a6xx_bcm[]) {
+ { .name = "SH0", .buswidth = 16 },
+ { .name = "MC0", .buswidth = 4 },
+ {
+ .name = "ACV",
+ .fixed = true,
+ .perfmode = BIT(3),
+ .perfmode_bw = 16500000,
+ },
+ { /* sentinel */ },
+ },
},
.address_space_size = SZ_16G,
.preempt_record_size = 4192 * SZ_1K,
@@ -1432,6 +1443,17 @@ static const struct adreno_info a7xx_gpus[] = {
.pwrup_reglist = &a7xx_pwrup_reglist,
.gmu_chipid = 0x7090100,
.gmu_cgc_mode = 0x00020202,
+ .bcms = (const struct a6xx_bcm[]) {
+ { .name = "SH0", .buswidth = 16 },
+ { .name = "MC0", .buswidth = 4 },
+ {
+ .name = "ACV",
+ .fixed = true,
+ .perfmode = BIT(2),
+ .perfmode_bw = 10687500,
+ },
+ { /* sentinel */ },
+ },
},
.address_space_size = SZ_16G,
.preempt_record_size = 3572 * SZ_1K,
--
2.34.1
* [PATCH v6 6/7] arm64: qcom: dts: sm8550: add interconnect and opp-peak-kBps for GPU
2024-12-17 14:51 [PATCH v6 0/7] drm/msm: adreno: add support for DDR bandwidth scaling via GMU Neil Armstrong
` (4 preceding siblings ...)
2024-12-17 14:51 ` [PATCH v6 5/7] drm/msm: adreno: enable GMU bandwidth for A740 and A750 Neil Armstrong
@ 2024-12-17 14:51 ` Neil Armstrong
2024-12-23 14:53 ` Konrad Dybcio
2024-12-17 14:51 ` [PATCH v6 7/7] arm64: qcom: dts: sm8650: " Neil Armstrong
2024-12-26 22:38 ` (subset) [PATCH v6 0/7] drm/msm: adreno: add support for DDR bandwidth scaling via GMU Bjorn Andersson
7 siblings, 1 reply; 15+ messages in thread
From: Neil Armstrong @ 2024-12-17 14:51 UTC
To: Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, David Airlie, Simona Vetter,
Bjorn Andersson, Rob Herring, Krzysztof Kozlowski, Conor Dooley,
Akhil P Oommen
Cc: linux-arm-msm, dri-devel, freedreno, linux-kernel, devicetree,
Neil Armstrong
Each GPU OPP requires a specific peak DDR bandwidth, so let's add
those to each OPP, along with the related interconnect path.
Reviewed-by: Akhil P Oommen <quic_akhilpo@quicinc.com>
Signed-off-by: Neil Armstrong <neil.armstrong@linaro.org>
---
arch/arm64/boot/dts/qcom/sm8550.dtsi | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/arch/arm64/boot/dts/qcom/sm8550.dtsi b/arch/arm64/boot/dts/qcom/sm8550.dtsi
index e7774d32fb6d2288748ecec00bf525b2b3c40fbb..dedd4a2a58f2c89b6e1b12d955da9ef8734604c2 100644
--- a/arch/arm64/boot/dts/qcom/sm8550.dtsi
+++ b/arch/arm64/boot/dts/qcom/sm8550.dtsi
@@ -14,6 +14,7 @@
#include <dt-bindings/firmware/qcom,scm.h>
#include <dt-bindings/gpio/gpio.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
+#include <dt-bindings/interconnect/qcom,icc.h>
#include <dt-bindings/interconnect/qcom,sm8550-rpmh.h>
#include <dt-bindings/mailbox/qcom-ipcc.h>
#include <dt-bindings/power/qcom-rpmpd.h>
@@ -2114,6 +2115,10 @@ gpu: gpu@3d00000 {
qcom,gmu = <&gmu>;
#cooling-cells = <2>;
+ interconnects = <&gem_noc MASTER_GFX3D QCOM_ICC_TAG_ALWAYS
+ &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ interconnect-names = "gfx-mem";
+
status = "disabled";
zap-shader {
@@ -2127,41 +2132,49 @@ gpu_opp_table: opp-table {
opp-680000000 {
opp-hz = /bits/ 64 <680000000>;
opp-level = <RPMH_REGULATOR_LEVEL_SVS_L1>;
+ opp-peak-kBps = <16500000>;
};
opp-615000000 {
opp-hz = /bits/ 64 <615000000>;
opp-level = <RPMH_REGULATOR_LEVEL_SVS_L0>;
+ opp-peak-kBps = <12449218>;
};
opp-550000000 {
opp-hz = /bits/ 64 <550000000>;
opp-level = <RPMH_REGULATOR_LEVEL_SVS>;
+ opp-peak-kBps = <10687500>;
};
opp-475000000 {
opp-hz = /bits/ 64 <475000000>;
opp-level = <RPMH_REGULATOR_LEVEL_LOW_SVS_L1>;
+ opp-peak-kBps = <6074218>;
};
opp-401000000 {
opp-hz = /bits/ 64 <401000000>;
opp-level = <RPMH_REGULATOR_LEVEL_LOW_SVS>;
+ opp-peak-kBps = <6074218>;
};
opp-348000000 {
opp-hz = /bits/ 64 <348000000>;
opp-level = <RPMH_REGULATOR_LEVEL_LOW_SVS_D0>;
+ opp-peak-kBps = <6074218>;
};
opp-295000000 {
opp-hz = /bits/ 64 <295000000>;
opp-level = <RPMH_REGULATOR_LEVEL_LOW_SVS_D1>;
+ opp-peak-kBps = <6074218>;
};
opp-220000000 {
opp-hz = /bits/ 64 <220000000>;
opp-level = <RPMH_REGULATOR_LEVEL_LOW_SVS_D2>;
+ opp-peak-kBps = <2136718>;
};
};
};
--
2.34.1
* [PATCH v6 7/7] arm64: qcom: dts: sm8650: add interconnect and opp-peak-kBps for GPU
2024-12-17 14:51 [PATCH v6 0/7] drm/msm: adreno: add support for DDR bandwidth scaling via GMU Neil Armstrong
` (5 preceding siblings ...)
2024-12-17 14:51 ` [PATCH v6 6/7] arm64: qcom: dts: sm8550: add interconnect and opp-peak-kBps for GPU Neil Armstrong
@ 2024-12-17 14:51 ` Neil Armstrong
2024-12-23 14:53 ` Konrad Dybcio
2024-12-26 22:38 ` (subset) [PATCH v6 0/7] drm/msm: adreno: add support for DDR bandwidth scaling via GMU Bjorn Andersson
7 siblings, 1 reply; 15+ messages in thread
From: Neil Armstrong @ 2024-12-17 14:51 UTC
To: Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, David Airlie, Simona Vetter,
Bjorn Andersson, Rob Herring, Krzysztof Kozlowski, Conor Dooley,
Akhil P Oommen
Cc: linux-arm-msm, dri-devel, freedreno, linux-kernel, devicetree,
Neil Armstrong
Each GPU OPP requires a specific peak DDR bandwidth, so let's add
those to each OPP, along with the related interconnect path.
Signed-off-by: Neil Armstrong <neil.armstrong@linaro.org>
---
arch/arm64/boot/dts/qcom/sm8650.dtsi | 15 +++++++++++++++
1 file changed, 15 insertions(+)
diff --git a/arch/arm64/boot/dts/qcom/sm8650.dtsi b/arch/arm64/boot/dts/qcom/sm8650.dtsi
index 25e47505adcb790d09f1d2726386438487255824..c76c0038c35ab048c88be9870b14c3a0b24b4183 100644
--- a/arch/arm64/boot/dts/qcom/sm8650.dtsi
+++ b/arch/arm64/boot/dts/qcom/sm8650.dtsi
@@ -2636,6 +2636,10 @@ gpu: gpu@3d00000 {
qcom,gmu = <&gmu>;
#cooling-cells = <2>;
+ interconnects = <&gem_noc MASTER_GFX3D QCOM_ICC_TAG_ALWAYS
+ &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+ interconnect-names = "gfx-mem";
+
status = "disabled";
zap-shader {
@@ -2649,56 +2653,67 @@ gpu_opp_table: opp-table {
opp-231000000 {
opp-hz = /bits/ 64 <231000000>;
opp-level = <RPMH_REGULATOR_LEVEL_LOW_SVS_D2>;
+ opp-peak-kBps = <2136718>;
};
opp-310000000 {
opp-hz = /bits/ 64 <310000000>;
opp-level = <RPMH_REGULATOR_LEVEL_LOW_SVS_D1>;
+ opp-peak-kBps = <2136718>;
};
opp-366000000 {
opp-hz = /bits/ 64 <366000000>;
opp-level = <RPMH_REGULATOR_LEVEL_LOW_SVS_D0>;
+ opp-peak-kBps = <6074218>;
};
opp-422000000 {
opp-hz = /bits/ 64 <422000000>;
opp-level = <RPMH_REGULATOR_LEVEL_LOW_SVS>;
+ opp-peak-kBps = <8171875>;
};
opp-500000000 {
opp-hz = /bits/ 64 <500000000>;
opp-level = <RPMH_REGULATOR_LEVEL_LOW_SVS_L1>;
+ opp-peak-kBps = <8171875>;
};
opp-578000000 {
opp-hz = /bits/ 64 <578000000>;
opp-level = <RPMH_REGULATOR_LEVEL_SVS>;
+ opp-peak-kBps = <8171875>;
};
opp-629000000 {
opp-hz = /bits/ 64 <629000000>;
opp-level = <RPMH_REGULATOR_LEVEL_SVS_L0>;
+ opp-peak-kBps = <10687500>;
};
opp-680000000 {
opp-hz = /bits/ 64 <680000000>;
opp-level = <RPMH_REGULATOR_LEVEL_SVS_L1>;
+ opp-peak-kBps = <12449218>;
};
opp-720000000 {
opp-hz = /bits/ 64 <720000000>;
opp-level = <RPMH_REGULATOR_LEVEL_SVS_L2>;
+ opp-peak-kBps = <12449218>;
};
opp-770000000 {
opp-hz = /bits/ 64 <770000000>;
opp-level = <RPMH_REGULATOR_LEVEL_NOM>;
+ opp-peak-kBps = <12449218>;
};
opp-834000000 {
opp-hz = /bits/ 64 <834000000>;
opp-level = <RPMH_REGULATOR_LEVEL_NOM_L1>;
+ opp-peak-kBps = <14398437>;
};
};
};
--
2.34.1
* Re: [PATCH v6 3/7] drm/msm: adreno: dynamically generate GMU bw table
2024-12-17 14:51 ` [PATCH v6 3/7] drm/msm: adreno: dynamically generate GMU bw table Neil Armstrong
@ 2024-12-17 17:52 ` Akhil P Oommen
2024-12-23 15:02 ` Konrad Dybcio
1 sibling, 0 replies; 15+ messages in thread
From: Akhil P Oommen @ 2024-12-17 17:52 UTC
To: Neil Armstrong, Rob Clark, Sean Paul, Konrad Dybcio,
Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten, David Airlie,
Simona Vetter, Bjorn Andersson, Rob Herring, Krzysztof Kozlowski,
Conor Dooley
Cc: linux-arm-msm, dri-devel, freedreno, linux-kernel, devicetree
On 12/17/2024 8:21 PM, Neil Armstrong wrote:
> The Adreno GPU Management Unit (GMU) can also scale the DDR
> bandwidth alongside the frequency and power domain levels, but for
> now we statically fill the bw_table with values from the
> downstream driver.
>
> Only the first entry, which is a disable vote, is used, so we
> currently rely on scaling via the Linux interconnect paths.
>
> Let's dynamically generate the bw_table with the vote values
> previously calculated from the OPPs.
>
> Those entries will then be used by the GMU when passing the
> appropriate bandwidth level while voting for a GPU frequency.
>
> Signed-off-by: Neil Armstrong <neil.armstrong@linaro.org>
Reviewed-by: Akhil P Oommen <quic_akhilpo@quicinc.com>
-Akhil
> ---
> drivers/gpu/drm/msm/adreno/a6xx_hfi.c | 48 ++++++++++++++++++++++++++++++++++-
> 1 file changed, 47 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_hfi.c b/drivers/gpu/drm/msm/adreno/a6xx_hfi.c
> index cb8844ed46b29c4569d05eb7a24f7b27e173190f..995526620d678cd05020315f771213e4a6943bec 100644
> --- a/drivers/gpu/drm/msm/adreno/a6xx_hfi.c
> +++ b/drivers/gpu/drm/msm/adreno/a6xx_hfi.c
> @@ -6,6 +6,7 @@
> #include <linux/list.h>
>
> #include <soc/qcom/cmd-db.h>
> +#include <soc/qcom/tcs.h>
>
> #include "a6xx_gmu.h"
> #include "a6xx_gmu.xml.h"
> @@ -259,6 +260,48 @@ static int a6xx_hfi_send_perf_table(struct a6xx_gmu *gmu)
> NULL, 0);
> }
>
> +static void a6xx_generate_bw_table(const struct a6xx_info *info, struct a6xx_gmu *gmu,
> + struct a6xx_hfi_msg_bw_table *msg)
> +{
> + unsigned int i, j;
> +
> + for (i = 0; i < GMU_MAX_BCMS; i++) {
> + if (!info->bcms[i].name)
> + break;
> + msg->ddr_cmds_addrs[i] = cmd_db_read_addr(info->bcms[i].name);
> + }
> + msg->ddr_cmds_num = i;
> +
> + for (i = 0; i < gmu->nr_gpu_bws; ++i)
> + for (j = 0; j < msg->ddr_cmds_num; j++)
> + msg->ddr_cmds_data[i][j] = gmu->gpu_ib_votes[i][j];
> + msg->bw_level_num = gmu->nr_gpu_bws;
> +
> + /* Compute the wait bitmask with each BCM having the commit bit */
> + msg->ddr_wait_bitmask = 0;
> + for (j = 0; j < msg->ddr_cmds_num; j++)
> + if (msg->ddr_cmds_data[0][j] & BCM_TCS_CMD_COMMIT_MASK)
> + msg->ddr_wait_bitmask |= BIT(j);
> +
> + /*
> + * These are the CX (CNOC) votes - these are used by the GMU
> + * The 'CN0' BCM is used on all targets, and votes are basically
> + * 'off' and 'on' states with first bit to enable the path.
> + */
> +
> + msg->cnoc_cmds_addrs[0] = cmd_db_read_addr("CN0");
> + msg->cnoc_cmds_num = 1;
> +
> + msg->cnoc_cmds_data[0][0] = BCM_TCS_CMD(true, false, 0, 0);
> + msg->cnoc_cmds_data[1][0] = BCM_TCS_CMD(true, true, 0, BIT(0));
> +
> + /* Compute the wait bitmask with each BCM having the commit bit */
> + msg->cnoc_wait_bitmask = 0;
> + for (j = 0; j < msg->cnoc_cmds_num; j++)
> + if (msg->cnoc_cmds_data[0][j] & BCM_TCS_CMD_COMMIT_MASK)
> + msg->cnoc_wait_bitmask |= BIT(j);
> +}
> +
> static void a618_build_bw_table(struct a6xx_hfi_msg_bw_table *msg)
> {
> /* Send a single "off" entry since the 618 GMU doesn't do bus scaling */
> @@ -664,6 +707,7 @@ static int a6xx_hfi_send_bw_table(struct a6xx_gmu *gmu)
> struct a6xx_hfi_msg_bw_table *msg;
> struct a6xx_gpu *a6xx_gpu = container_of(gmu, struct a6xx_gpu, gmu);
> struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
> + const struct a6xx_info *info = adreno_gpu->info->a6xx;
>
> if (gmu->bw_table)
> goto send;
> @@ -672,7 +716,9 @@ static int a6xx_hfi_send_bw_table(struct a6xx_gmu *gmu)
> if (!msg)
> return -ENOMEM;
>
> - if (adreno_is_a618(adreno_gpu))
> + if (info->bcms && gmu->nr_gpu_bws > 1)
> + a6xx_generate_bw_table(info, gmu, msg);
> + else if (adreno_is_a618(adreno_gpu))
> a618_build_bw_table(msg);
> else if (adreno_is_a619(adreno_gpu))
> a619_build_bw_table(msg);
>
* Re: [PATCH v6 4/7] drm/msm: adreno: find bandwidth index of OPP and set it along freq index
2024-12-17 14:51 ` [PATCH v6 4/7] drm/msm: adreno: find bandwidth index of OPP and set it along freq index Neil Armstrong
@ 2024-12-23 14:50 ` Konrad Dybcio
0 siblings, 0 replies; 15+ messages in thread
From: Konrad Dybcio @ 2024-12-23 14:50 UTC
To: Neil Armstrong, Rob Clark, Sean Paul, Konrad Dybcio,
Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten, David Airlie,
Simona Vetter, Bjorn Andersson, Rob Herring, Krzysztof Kozlowski,
Conor Dooley, Akhil P Oommen
Cc: linux-arm-msm, dri-devel, freedreno, linux-kernel, devicetree
On 17.12.2024 3:51 PM, Neil Armstrong wrote:
> The Adreno GPU Management Unit (GMU) can also scale the DDR bandwidth
> alongside the frequency and power domain levels; until now we left the
> OPP core to scale the OPP bandwidth via the interconnect path.
>
> In order to enable bandwidth voting via the GPU Management
> Unit (GMU), when an OPP is set by devfreq we also look up
> the corresponding bandwidth index in the previously generated
> bw_table and pass this value along with the frequency index to the GMU.
>
> The GMU also takes another vote, called AB, which is a 16-bit
> quantized value of the floor bandwidth against the maximum supported
> bandwidth.
>
> The AB is calculated as a default 25% of the bandwidth, like the
> downstream implementation, to inform the GMU firmware of the minimum
> bandwidth we require for this OPP. The AB vote is only passed
> starting from A750 GPUs.
>
> Since we now vote for all resources via the GMU, setting the OPP
> is no longer needed, so we can completely skip calling
> dev_pm_opp_set_opp() in this situation.
>
> Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
> Reviewed-by: Akhil P Oommen <quic_akhilpo@quicinc.com>
> Signed-off-by: Neil Armstrong <neil.armstrong@linaro.org>
> ---
Reviewed-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
Konrad
>
> +#define AB_VOTE_MASK GENMASK(31, 16)
> +#define MAX_AB_VOTE (FIELD_MAX(AB_VOTE_MASK) - 1)
I'm just not 1000% sure about this -1 here
Konrad
* Re: [PATCH v6 6/7] arm64: qcom: dts: sm8550: add interconnect and opp-peak-kBps for GPU
2024-12-17 14:51 ` [PATCH v6 6/7] arm64: qcom: dts: sm8550: add interconnect and opp-peak-kBps for GPU Neil Armstrong
@ 2024-12-23 14:53 ` Konrad Dybcio
0 siblings, 0 replies; 15+ messages in thread
From: Konrad Dybcio @ 2024-12-23 14:53 UTC
To: Neil Armstrong, Rob Clark, Sean Paul, Konrad Dybcio,
Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten, David Airlie,
Simona Vetter, Bjorn Andersson, Rob Herring, Krzysztof Kozlowski,
Conor Dooley, Akhil P Oommen
Cc: linux-arm-msm, dri-devel, freedreno, linux-kernel, devicetree
On 17.12.2024 3:51 PM, Neil Armstrong wrote:
> Each GPU OPP requires a specific peak DDR bandwidth, let's add
> those to each OPP and also the related interconnect path.
>
> Reviewed-by: Akhil P Oommen <quic_akhilpo@quicinc.com>
> Signed-off-by: Neil Armstrong <neil.armstrong@linaro.org>
> ---
Reviewed-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
Konrad
* Re: [PATCH v6 7/7] arm64: qcom: dts: sm8650: add interconnect and opp-peak-kBps for GPU
2024-12-17 14:51 ` [PATCH v6 7/7] arm64: qcom: dts: sm8650: " Neil Armstrong
@ 2024-12-23 14:53 ` Konrad Dybcio
0 siblings, 0 replies; 15+ messages in thread
From: Konrad Dybcio @ 2024-12-23 14:53 UTC
To: Neil Armstrong, Rob Clark, Sean Paul, Konrad Dybcio,
Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten, David Airlie,
Simona Vetter, Bjorn Andersson, Rob Herring, Krzysztof Kozlowski,
Conor Dooley, Akhil P Oommen
Cc: linux-arm-msm, dri-devel, freedreno, linux-kernel, devicetree
On 17.12.2024 3:51 PM, Neil Armstrong wrote:
> Each GPU OPP requires a specific peak DDR bandwidth, let's add
> those to each OPP and also the related interconnect path.
>
> Signed-off-by: Neil Armstrong <neil.armstrong@linaro.org>
> ---
Reviewed-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
Konrad
* Re: [PATCH v6 2/7] drm/msm: adreno: add plumbing to generate bandwidth vote table for GMU
2024-12-17 14:51 ` [PATCH v6 2/7] drm/msm: adreno: add plumbing to generate bandwidth vote table for GMU Neil Armstrong
@ 2024-12-23 14:54 ` Konrad Dybcio
0 siblings, 0 replies; 15+ messages in thread
From: Konrad Dybcio @ 2024-12-23 14:54 UTC
To: Neil Armstrong, Rob Clark, Sean Paul, Konrad Dybcio,
Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten, David Airlie,
Simona Vetter, Bjorn Andersson, Rob Herring, Krzysztof Kozlowski,
Conor Dooley, Akhil P Oommen
Cc: linux-arm-msm, dri-devel, freedreno, linux-kernel, devicetree
On 17.12.2024 3:51 PM, Neil Armstrong wrote:
> The Adreno GPU Management Unit (GMU) can also scale DDR bandwidth
> alongside the frequency and power domain levels, but by default we let
> the OPP core scale the interconnect DDR path.
>
> While scaling via the interconnect path was sufficient so far, newer
> GPUs like the A750 require specific vote parameters and bandwidth to
> achieve full functionality.
>
> In order to calculate the vote values used by the GPU Management
> Unit (GMU), we need to parse all the possible OPP bandwidths and
> create a vote value to be sent to the appropriate Bus Control
> Modules (BCMs) declared in the GPU info struct.
>
> This vote value is called IB; the GMU also takes another vote,
> called AB, which is a 16-bit quantized value of the floor bandwidth
> against the maximum supported bandwidth.
> The AB vote will be calculated later, when setting the frequency.
>
> The vote array will then be used to dynamically generate the GMU
> bw_table sent during the GMU power-up.
>
> Reviewed-by: Akhil P Oommen <quic_akhilpo@quicinc.com>
> Signed-off-by: Neil Armstrong <neil.armstrong@linaro.org>
> ---
Reviewed-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
Konrad
* Re: [PATCH v6 3/7] drm/msm: adreno: dynamically generate GMU bw table
2024-12-17 14:51 ` [PATCH v6 3/7] drm/msm: adreno: dynamically generate GMU bw table Neil Armstrong
2024-12-17 17:52 ` Akhil P Oommen
@ 2024-12-23 15:02 ` Konrad Dybcio
1 sibling, 0 replies; 15+ messages in thread
From: Konrad Dybcio @ 2024-12-23 15:02 UTC
To: Neil Armstrong, Rob Clark, Sean Paul, Konrad Dybcio,
Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten, David Airlie,
Simona Vetter, Bjorn Andersson, Rob Herring, Krzysztof Kozlowski,
Conor Dooley, Akhil P Oommen
Cc: linux-arm-msm, dri-devel, freedreno, linux-kernel, devicetree
On 17.12.2024 3:51 PM, Neil Armstrong wrote:
> The Adreno GPU Management Unit (GMU) can also scale the DDR
> bandwidth alongside the frequency and power domain levels, but for
> now we statically fill the bw_table with values from the
> downstream driver.
>
> Only the first entry, which is a disable vote, is used, so we
> currently rely on scaling via the Linux interconnect paths.
>
> Let's dynamically generate the bw_table with the vote values
> previously calculated from the OPPs.
>
> Those entries will then be used by the GMU when passing the
> appropriate bandwidth level while voting for a GPU frequency.
>
> Signed-off-by: Neil Armstrong <neil.armstrong@linaro.org>
> ---
[...]
> + /*
> + * These are the CX (CNOC) votes - these are used by the GMU
> + * The 'CN0' BCM is used on all targets, and votes are basically
> + * 'off' and 'on' states with first bit to enable the path.
> + */
> +
> + msg->cnoc_cmds_addrs[0] = cmd_db_read_addr("CN0");
> + msg->cnoc_cmds_num = 1;
> +
> + msg->cnoc_cmds_data[0][0] = BCM_TCS_CMD(true, false, 0, 0);
> + msg->cnoc_cmds_data[1][0] = BCM_TCS_CMD(true, true, 0, BIT(0));
> +
> + /* Compute the wait bitmask with each BCM having the commit bit */
> + msg->cnoc_wait_bitmask = 0;
> + for (j = 0; j < msg->cnoc_cmds_num; j++)
> + if (msg->cnoc_cmds_data[0][j] & BCM_TCS_CMD_COMMIT_MASK)
> + msg->cnoc_wait_bitmask |= BIT(j);
Still very much not a fan of this.
I think this would be equally telling:
/* Always flush on/off commands */
msg->cnoc_wait_bitmask = BIT(0);
with or without that:
Reviewed-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
Konrad
* Re: (subset) [PATCH v6 0/7] drm/msm: adreno: add support for DDR bandwidth scaling via GMU
2024-12-17 14:51 [PATCH v6 0/7] drm/msm: adreno: add support for DDR bandwidth scaling via GMU Neil Armstrong
` (6 preceding siblings ...)
2024-12-17 14:51 ` [PATCH v6 7/7] arm64: qcom: dts: sm8650: " Neil Armstrong
@ 2024-12-26 22:38 ` Bjorn Andersson
7 siblings, 0 replies; 15+ messages in thread
From: Bjorn Andersson @ 2024-12-26 22:38 UTC
To: Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, David Airlie, Simona Vetter,
Rob Herring, Krzysztof Kozlowski, Conor Dooley, Akhil P Oommen,
Neil Armstrong
Cc: linux-arm-msm, dri-devel, freedreno, linux-kernel, devicetree
On Tue, 17 Dec 2024 15:51:13 +0100, Neil Armstrong wrote:
> The Adreno GPU Management Unit (GMU) can also vote for DDR bandwidth
> alongside the frequency and power domain levels, but by default we let
> the OPP core scale the interconnect DDR path.
>
> While scaling the interconnect path alone was sufficient so far, newer
> GPUs like the A750 require specific vote parameters and bandwidth to
> achieve full functionality.
>
> [...]
Applied, thanks!
[6/7] arm64: qcom: dts: sm8550: add interconnect and opp-peak-kBps for GPU
commit: 1ba40079267930643eade4282258562085d4319d
[7/7] arm64: qcom: dts: sm8650: add interconnect and opp-peak-kBps for GPU
commit: 63c21d61b46197b6295e12dbf29adff29c18ae2c
Best regards,
--
Bjorn Andersson <andersson@kernel.org>