From: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
To: Kuogee Hsieh <quic_khsieh@quicinc.com>
Cc: dri-devel@lists.freedesktop.org, robdclark@gmail.com,
sean@poorly.run, swboyd@chromium.org, dianders@chromium.org,
vkoul@kernel.org, daniel@ffwll.ch, airlied@gmail.com,
agross@kernel.org, andersson@kernel.org,
quic_abhinavk@quicinc.com, quic_jesszhan@quicinc.com,
quic_sbillaka@quicinc.com, marijn.suijten@somainline.org,
freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH v6] drm/msm/dpu: improve DSC allocation
Date: Wed, 20 Dec 2023 00:32:01 +0200
Message-ID: <CAA8EJpoBiiTbc91E8hSK0seBOXAW++8V8yJzbGmCJJcXbZ3raQ@mail.gmail.com>
In-Reply-To: <a5ec8760-cdfe-b420-43c1-913b0095ba93@quicinc.com>
On Tue, 19 Dec 2023 at 18:18, Kuogee Hsieh <quic_khsieh@quicinc.com> wrote:
>
> Hi Dmitry,
>
> Anymore comments from you?
No, for some reason I missed this patch. Please excuse me.
> On 12/14/2023 10:56 AM, Kuogee Hsieh wrote:
> > In DSC V1.1, a DCE (Display Compression Engine) contains a single DSC
> > encoder. In DSC V1.2, a DCE consists of two DSC encoders, one with an
> > even index and one with an odd index. Each encoder can work
> > independently, but only the two DSC encoders belonging to the same DCE
> > can be paired to support DSC merge mode on DSC V1.2. For DSC V1.1, two
> > consecutive DSC encoders (starting with an even index) have to be
> > paired to support DSC merge mode. In addition, regardless of DSC V1.1
> > or V1.2, a DSC with an even index has to be mapped to a PINGPONG with
> > an even index and a DSC with an odd index has to be mapped to a
> > PINGPONG with an odd index in its data path. This patch improves the
> > DSC allocation mechanism to take these factors into account.
> >
> > Changes in V6:
> > -- rename _dpu_rm_reserve_dsc_single to _dpu_rm_dsc_alloc
> > -- rename _dpu_rm_reserve_dsc_pair to _dpu_rm_dsc_alloc_pair
> > -- pass global_state to _dpu_rm_pingpong_next_index()
> > -- remove pp_max
> > -- fix for loop condition check at _dpu_rm_dsc_alloc()
> >
> > Changes in V5:
> > -- delete dsc_id[]
> > -- update to global_state->dsc_to_enc_id[] directly
> > -- replace ndx with idx
> > -- fix indentation at function declaration
> > -- only one for loop at _dpu_rm_reserve_dsc_single()
> >
> > Changes in V4:
> > -- rework commit message
> > -- use reserved_by_other()
> > -- add _dpu_rm_pingpong_next_index()
> > -- revise _dpu_rm_pingpong_dsc_check()
> >
> > Changes in V3:
> > -- add dpu_rm_pingpong_dsc_check()
> > -- for pair allocation use i += 2 at for loop
> >
> > Changes in V2:
> > -- split _dpu_rm_reserve_dsc() into _dpu_rm_reserve_dsc_single() and
> > _dpu_rm_reserve_dsc_pair()
> >
> > Fixes: f2803ee91a41 ("drm/msm/disp/dpu1: Add DSC support in RM")
> > Signed-off-by: Kuogee Hsieh <quic_khsieh@quicinc.com>
> > ---
> > drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c | 154 +++++++++++++++++++++++++++++----
> > 1 file changed, 139 insertions(+), 15 deletions(-)
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
See below for a minor nit.
> >
> > diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c
> > index f9215643..0ce2a25 100644
> > --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c
> > +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c
> > @@ -461,29 +461,153 @@ static int _dpu_rm_reserve_ctls(
> > return 0;
> > }
> >
> > -static int _dpu_rm_reserve_dsc(struct dpu_rm *rm,
> > - struct dpu_global_state *global_state,
> > - struct drm_encoder *enc,
> > - const struct msm_display_topology *top)
> > +static int _dpu_rm_pingpong_next_index(struct dpu_global_state *global_state,
> > + int start,
I'd still prefer to see `enum dpu_pingpong` as a parameter here
instead of just an index, but this is just my taste.
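Something along these lines (untested sketch, just to illustrate the idea;
the helper name and the -ENAVAIL error value are kept from your patch):

static int _dpu_rm_pingpong_next_index(struct dpu_global_state *global_state,
                                       enum dpu_pingpong start,
                                       uint32_t enc_id)
{
        enum dpu_pingpong pp;

        for (pp = start; pp < PINGPONG_MAX; pp++) {
                /* pingpong_to_enc_id[] is indexed from PINGPONG_0 */
                if (global_state->pingpong_to_enc_id[pp - PINGPONG_0] == enc_id)
                        return pp;
        }

        return -ENAVAIL;
}

with the callers passing PINGPONG_0 as the starting point and doing the
parity check on (pp - PINGPONG_0) rather than on the raw enum value.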
> > + uint32_t enc_id)
> > {
> > - int num_dsc = top->num_dsc;
> > int i;
> >
> > - /* check if DSC required are allocated or not */
> > - for (i = 0; i < num_dsc; i++) {
> > - if (!rm->dsc_blks[i]) {
> > - DPU_ERROR("DSC %d does not exist\n", i);
> > - return -EIO;
> > + for (i = start; i < (PINGPONG_MAX - PINGPONG_0); i++) {
> > + if (global_state->pingpong_to_enc_id[i] == enc_id)
> > + return i;
> > + }
> > +
> > + return -ENAVAIL;
> > +}
> > +
> > +static int _dpu_rm_pingpong_dsc_check(int dsc_idx, int pp_idx)
> > +{
> > + /*
> > + * DSC with even index must be used with the PINGPONG with even index
> > + * DSC with odd index must be used with the PINGPONG with odd index
> > + */
> > + if ((dsc_idx & 0x01) != (pp_idx & 0x01))
> > + return -ENAVAIL;
> > +
> > + return 0;
> > +}
> > +
> > +static int _dpu_rm_dsc_alloc(struct dpu_rm *rm,
> > + struct dpu_global_state *global_state,
> > + uint32_t enc_id,
> > + const struct msm_display_topology *top)
> > +{
> > + int num_dsc = 0;
> > + int pp_idx = 0;
> > + int dsc_idx;
> > + int ret;
> > +
> > + for (dsc_idx = 0; dsc_idx < ARRAY_SIZE(rm->dsc_blks) &&
> > + num_dsc < top->num_dsc; dsc_idx++) {
> > + if (!rm->dsc_blks[dsc_idx])
> > + continue;
> > +
> > + if (reserved_by_other(global_state->dsc_to_enc_id, dsc_idx, enc_id))
> > + continue;
> > +
> > + pp_idx = _dpu_rm_pingpong_next_index(global_state, pp_idx, enc_id);
> > + if (pp_idx < 0)
> > + return -ENAVAIL;
> > +
> > + ret = _dpu_rm_pingpong_dsc_check(dsc_idx, pp_idx);
> > + if (ret)
> > + return -ENAVAIL;
> > +
> > + global_state->dsc_to_enc_id[dsc_idx] = enc_id;
> > + num_dsc++;
> > + pp_idx++;
> > + }
> > +
> > + if (num_dsc < top->num_dsc) {
> > + DPU_ERROR("DSC allocation failed num_dsc=%d required=%d\n",
> > + num_dsc, top->num_dsc);
> > + return -ENAVAIL;
> > + }
> > +
> > + return 0;
> > +}
> > +
> > +static int _dpu_rm_dsc_alloc_pair(struct dpu_rm *rm,
> > + struct dpu_global_state *global_state,
> > + uint32_t enc_id,
> > + const struct msm_display_topology *top)
> > +{
> > + int num_dsc = 0;
> > + int dsc_idx, pp_idx = 0;
> > + int ret;
> > +
> > + /* only start from even dsc index */
> > + for (dsc_idx = 0; dsc_idx < ARRAY_SIZE(rm->dsc_blks) &&
> > + num_dsc < top->num_dsc; dsc_idx += 2) {
> > + if (!rm->dsc_blks[dsc_idx] ||
> > + !rm->dsc_blks[dsc_idx + 1])
> > + continue;
> > +
> > + /* consecutive dsc indexes to be paired */
> > + if (reserved_by_other(global_state->dsc_to_enc_id, dsc_idx, enc_id) ||
> > + reserved_by_other(global_state->dsc_to_enc_id, dsc_idx + 1, enc_id))
> > + continue;
> > +
> > + pp_idx = _dpu_rm_pingpong_next_index(global_state, pp_idx, enc_id);
> > + if (pp_idx < 0)
> > + return -ENAVAIL;
> > +
> > + ret = _dpu_rm_pingpong_dsc_check(dsc_idx, pp_idx);
> > + if (ret) {
> > + pp_idx = 0;
> > + continue;
> > }
> >
> > - if (global_state->dsc_to_enc_id[i]) {
> > - DPU_ERROR("DSC %d is already allocated\n", i);
> > - return -EIO;
> > + pp_idx = _dpu_rm_pingpong_next_index(global_state, pp_idx + 1, enc_id);
> > + if (pp_idx < 0)
> > + return -ENAVAIL;
> > +
> > + ret = _dpu_rm_pingpong_dsc_check(dsc_idx + 1, pp_idx);
> > + if (ret) {
> > + pp_idx = 0;
> > + continue;
> > }
> > +
> > + global_state->dsc_to_enc_id[dsc_idx] = enc_id;
> > + global_state->dsc_to_enc_id[dsc_idx + 1] = enc_id;
> > + num_dsc += 2;
> > + pp_idx++; /* start for next pair */
> > }
> >
> > - for (i = 0; i < num_dsc; i++)
> > - global_state->dsc_to_enc_id[i] = enc->base.id;
> > + if (num_dsc < top->num_dsc) {
> > + DPU_ERROR("DSC allocation failed num_dsc=%d required=%d\n",
> > + num_dsc, top->num_dsc);
> > + return -ENAVAIL;
> > + }
> > +
> > + return 0;
> > +}
> > +
> > +static int _dpu_rm_reserve_dsc(struct dpu_rm *rm,
> > + struct dpu_global_state *global_state,
> > + struct drm_encoder *enc,
> > + const struct msm_display_topology *top)
> > +{
> > + uint32_t enc_id = enc->base.id;
> > +
> > + if (!top->num_dsc || !top->num_intf)
> > + return 0;
> > +
> > + /*
> > + * Facts:
> > + * 1) no pingpong split (two layer mixers share one pingpong)
> > + * 2) DSC pair starts from even index, such as index(0,1), (2,3), etc
> > + * 3) even PINGPONG connects to even DSC
> > + * 4) odd PINGPONG connects to odd DSC
> > + * 5) pair: encoder +--> pp_idx_0 --> dsc_idx_0
> > + * +--> pp_idx_1 --> dsc_idx_1
> > + */
> > +
> > + /* num_dsc should be either 1, 2 or 4 */
> > + if (top->num_dsc > top->num_intf) /* merge mode */
> > + return _dpu_rm_dsc_alloc_pair(rm, global_state, enc_id, top);
> > + else
> > + return _dpu_rm_dsc_alloc(rm, global_state, enc_id, top);
> >
> > return 0;
> > }
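(As a concrete illustration of the num_dsc/num_intf check: 2 DSCs on a
single interface or 4 DSCs on two interfaces take the
_dpu_rm_dsc_alloc_pair() path, while 2 DSCs on two interfaces end up in
_dpu_rm_dsc_alloc() with one DSC per interface.)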
--
With best wishes
Dmitry