From: Lucas Stach <l.stach@pengutronix.de>
To: Sui Jingfeng <suijingfeng@loongson.cn>, etnaviv@lists.freedesktop.org
Cc: Russell King <linux+etnaviv@armlinux.org.uk>,
dri-devel@lists.freedesktop.org, kernel@pengutronix.de,
patchwork-lst@pengutronix.de
Subject: Re: drm/etnaviv: slow down FE idle polling
Date: Thu, 15 Jun 2023 11:04:40 +0200
Message-ID: <d17de4ebfd08faa23238ece2ad0b737bf271498b.camel@pengutronix.de>
In-Reply-To: <8c36b8bc-5a0d-75f7-265c-b0191979165a@loongson.cn>
Am Donnerstag, dem 15.06.2023 um 12:09 +0800 schrieb Sui Jingfeng:
> Hi,
>
> On 2023/6/7 20:59, Lucas Stach wrote:
> > Currently the FE is spinning way too fast when polling for new work in
> 'way' -> 'away'
> > the FE idleloop.
> 'idleloop' -> 'idle loop'
> > As each poll fetches 16 bytes from memory, a GPU running
> > at 1GHz with the current setting of 200 wait cycle between fetches causes
> > 80 MB/s of memory traffic just to check for new work when the GPU is
> > otherwise idle, which is more FE traffic than in some GPU loaded cases.
> >
> > Significantly increase the number of wait cycles to slow down the poll
> > interval to ~30µs, limiting the FE idle memory traffic to 512 KB/s, while
> > providing a max latency which should not hurt most use-cases. The FE WAIT
> > command seems to have some unknown discrete steps in the wait cycles
> add a comma here.
> > so
> > we may over/undershoot the target a bit, but that should be harmless.
> overshoot or undershoot
> > Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
> > Reviewed-by: Christian Gmeiner <cgmeiner@igalia.com>
> > ---
> > drivers/gpu/drm/etnaviv/etnaviv_buffer.c | 11 ++++++-----
> > drivers/gpu/drm/etnaviv/etnaviv_gpu.c | 7 +++++++
> > drivers/gpu/drm/etnaviv/etnaviv_gpu.h | 1 +
> > 3 files changed, 14 insertions(+), 5 deletions(-)
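[As a sanity check on the figures in the commit message above, the arithmetic can be sketched in a few lines of standalone C (not driver code; the 1 GHz clock and 16-byte fetch size are the values quoted in the commit message):]

```c
#include <assert.h>

/*
 * Idle-loop memory traffic in bytes/s: each WAIT burns `waitcycles`
 * core clock cycles, after which the FE fetches the next 16-byte
 * WAIT/LINK command pair from memory.
 */
unsigned long fe_poll_traffic(unsigned long core_hz, unsigned long waitcycles)
{
	return (core_hz / waitcycles) * 16;
}
```

[At 1 GHz, the old hard-coded 200 cycles gives 80 MB/s; the new ~30 µs interval (1e9 >> 15 = 30517 cycles) gives exactly 512 KB/s.]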
> >
> > diff --git a/drivers/gpu/drm/etnaviv/etnaviv_buffer.c b/drivers/gpu/drm/etnaviv/etnaviv_buffer.c
> > index cf741c5c82d2..384df1659be6 100644
> > --- a/drivers/gpu/drm/etnaviv/etnaviv_buffer.c
> > +++ b/drivers/gpu/drm/etnaviv/etnaviv_buffer.c
> > @@ -53,11 +53,12 @@ static inline void CMD_END(struct etnaviv_cmdbuf *buffer)
> > OUT(buffer, VIV_FE_END_HEADER_OP_END);
> > }
> >
> > -static inline void CMD_WAIT(struct etnaviv_cmdbuf *buffer)
> > +static inline void CMD_WAIT(struct etnaviv_cmdbuf *buffer,
> > + unsigned int waitcycles)
> > {
> > buffer->user_size = ALIGN(buffer->user_size, 8);
> >
> > - OUT(buffer, VIV_FE_WAIT_HEADER_OP_WAIT | 200);
> > + OUT(buffer, VIV_FE_WAIT_HEADER_OP_WAIT | waitcycles);
> > }
> >
> > static inline void CMD_LINK(struct etnaviv_cmdbuf *buffer,
> > @@ -168,7 +169,7 @@ u16 etnaviv_buffer_init(struct etnaviv_gpu *gpu)
> > /* initialize buffer */
> > buffer->user_size = 0;
> >
> > - CMD_WAIT(buffer);
> > + CMD_WAIT(buffer, gpu->fe_waitcycles);
> > CMD_LINK(buffer, 2,
> > etnaviv_cmdbuf_get_va(buffer, &gpu->mmu_context->cmdbuf_mapping)
> > + buffer->user_size - 4);
> > @@ -320,7 +321,7 @@ void etnaviv_sync_point_queue(struct etnaviv_gpu *gpu, unsigned int event)
> > CMD_END(buffer);
> >
> > /* Append waitlink */
> > - CMD_WAIT(buffer);
> > + CMD_WAIT(buffer, gpu->fe_waitcycles);
> > CMD_LINK(buffer, 2,
> > etnaviv_cmdbuf_get_va(buffer, &gpu->mmu_context->cmdbuf_mapping)
> > + buffer->user_size - 4);
> > @@ -503,7 +504,7 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, u32 exec_state,
> >
> > CMD_LOAD_STATE(buffer, VIVS_GL_EVENT, VIVS_GL_EVENT_EVENT_ID(event) |
> > VIVS_GL_EVENT_FROM_PE);
> > - CMD_WAIT(buffer);
> > + CMD_WAIT(buffer, gpu->fe_waitcycles);
> > CMD_LINK(buffer, 2,
> > etnaviv_cmdbuf_get_va(buffer, &gpu->mmu_context->cmdbuf_mapping)
> > + buffer->user_size - 4);
> > diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
> > index 41aab1aa330b..8c20dff32240 100644
> > --- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
> > +++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
> > @@ -493,6 +493,13 @@ static void etnaviv_gpu_update_clock(struct etnaviv_gpu *gpu)
> > clock |= VIVS_HI_CLOCK_CONTROL_FSCALE_VAL(fscale);
> > etnaviv_gpu_load_clock(gpu, clock);
> > }
> > +
> > + /*
> > + * Choose number of wait cycles to target a ~30us (1/32768) max latency
> > + * until new work is picked up by the FE when it polls in the idle loop.
> > + */
> > + gpu->fe_waitcycles = min(gpu->base_rate_core >> (15 - gpu->freq_scale),
> > + 0xffffUL);
>
> This patch is NOT effective on our hardware GC1000 v5037 (ls7a1000 +
> ls3a5000).
>
> Because gpu->base_rate_core is 0, gpu->fe_waitcycles ends up being zero
> as well.
>
Uh, that's a problem, as the patch will then have the opposite effect
on your platform by speeding up the idle loop. Thanks for catching
this! I'll improve the patch to keep a reasonable number of wait cycles
in this case.
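[For reference, one shape such a fallback could take (a sketch only, not the actual follow-up patch; the 200-cycle floor simply reuses the previous hard-coded value, and the helper name is made up for illustration):]

```c
/*
 * Hypothetical fallback: when the platform reports no core clock rate
 * (base_rate_core == 0), the shift yields 0 wait cycles, which would
 * make the idle loop spin even faster than the old fixed 200 cycles.
 * Clamp to [200, 0xffff] so such platforms keep a sane poll interval.
 */
unsigned long fe_waitcycles(unsigned long base_rate_core,
			    unsigned int freq_scale)
{
	unsigned long cycles = base_rate_core >> (15 - freq_scale);

	if (cycles < 200)
		cycles = 200;
	if (cycles > 0xffff)
		cycles = 0xffff;

	return cycles;
}
```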
Regards,
Lucas
> But after applying this patch, glmark2 still runs happily, with no
> visible regression. So
>
>
> Tested-by: Sui Jingfeng <suijingfeng@loongson.cn>
>
> > }
> >
> > static int etnaviv_hw_reset(struct etnaviv_gpu *gpu)
> > diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.h b/drivers/gpu/drm/etnaviv/etnaviv_gpu.h
> > index 98c6f9c320fc..e1e1de59c38d 100644
> > --- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.h
> > +++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.h
> > @@ -150,6 +150,7 @@ struct etnaviv_gpu {
> > struct clk *clk_shader;
> >
> > unsigned int freq_scale;
> > + unsigned int fe_waitcycles;
> > unsigned long base_rate_core;
> > unsigned long base_rate_shader;
> > };
>
Thread overview: 10+ messages
2023-06-07 12:59 [PATCH] drm/etnaviv: slow down FE idle polling Lucas Stach
2023-06-14 18:37 ` Christian Gmeiner
2023-06-15 4:09 ` Sui Jingfeng
2023-06-15  9:04   ` Lucas Stach
2023-06-15 9:16 ` Sui Jingfeng
2023-06-15 9:20 ` Christian Gmeiner
2023-06-15 9:37 ` Sui Jingfeng
2023-06-15 9:53 ` Lucas Stach
2023-06-15 13:41 ` Chris Healy
2023-06-15 13:51 ` Sui Jingfeng