From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <57031B80.1080602@rock-chips.com>
Date: Tue, 05 Apr 2016 09:57:20 +0800
From: Elaine Zhang <zhangqing@rock-chips.com>
To: Heiko Stuebner
CC: khilman@baylibre.com, xf@rock-chips.com, wxt@rock-chips.com,
 linux-arm-kernel@lists.infradead.org, huangtao@rock-chips.com,
 zyw@rock-chips.com, xxx@rock-chips.com, jay.xu@rock-chips.com,
 linux-rockchip@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v1 2/2] rockchip: power-domain: support qos save and restore
References: <1458285444-31129-1-git-send-email-zhangqing@rock-chips.com>
 <6919893.LfaTZNRxZs@phil> <56FDDE09.9080601@rock-chips.com>
 <3121335.t9IV1BMS6Y@phil>
In-Reply-To: <3121335.t9IV1BMS6Y@phil>
List-ID: <linux-kernel.vger.kernel.org>

hi, Heiko:

Thanks for your reply. For your questions, I also have the same concerns.

On 04/02/2016 12:19 AM, Heiko Stuebner wrote:
> Hi Elaine,
>
> On Friday, 1 April 2016, 10:33:45, Elaine Zhang wrote:
>> I agree with most of your modifications.
>> Except, the u32 *qos_save_regs below
>
> you're right.
> I didn't take that into account when open-coding my idea.
> A bit more below:
>
>> On 04/01/2016 12:31 AM, Heiko Stuebner wrote:
>>> Hi Elaine,
>>>
>>> On Friday, 18 March 2016, 15:17:24, Elaine Zhang wrote:
>>>> support qos save and restore when power domain on/off.
>>>>
>>>> Signed-off-by: Elaine Zhang
>>>
>>> overall looks nice already ... some implementation-specific comments
>>> below.
>>>
>>>> ---
>>>>  drivers/soc/rockchip/pm_domains.c | 87 +++++++++++++++++++++++++++++++++++++--
>>>>  1 file changed, 84 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/drivers/soc/rockchip/pm_domains.c b/drivers/soc/rockchip/pm_domains.c
>>>> index 18aee6b..c5f4be6 100644
>>>> --- a/drivers/soc/rockchip/pm_domains.c
>>>> +++ b/drivers/soc/rockchip/pm_domains.c
>>>> @@ -45,10 +45,21 @@ struct rockchip_pmu_info {
>>>>  	const struct rockchip_domain_info *domain_info;
>>>>  };
>>>>
>>>> +#define MAX_QOS_NODE_NUM	20
>>>> +#define MAX_QOS_REGS_NUM	5
>>>> +#define QOS_PRIORITY		0x08
>>>> +#define QOS_MODE		0x0c
>>>> +#define QOS_BANDWIDTH		0x10
>>>> +#define QOS_SATURATION		0x14
>>>> +#define QOS_EXTCONTROL		0x18
>>>> +
>>>>  struct rockchip_pm_domain {
>>>>  	struct generic_pm_domain genpd;
>>>>  	const struct rockchip_domain_info *info;
>>>>  	struct rockchip_pmu *pmu;
>>>> +	int num_qos;
>>>> +	struct regmap *qos_regmap[MAX_QOS_NODE_NUM];
>>>> +	u32 qos_save_regs[MAX_QOS_NODE_NUM][MAX_QOS_REGS_NUM];
>>>
>>> 	struct regmap **qos_regmap;
>>> 	u32 *qos_save_regs;
>>
>> when we save and restore qos registers we need to save five regs for every
>> qos,
>> like this:
>> 	for (i = 0; i < pd->num_qos; i++) {
>> 		regmap_read(pd->qos_regmap[i], QOS_PRIORITY,
>> 			    &pd->qos_save_regs[i][0]);
>> 		regmap_read(pd->qos_regmap[i], QOS_MODE,
>> 			    &pd->qos_save_regs[i][1]);
>> 		regmap_read(pd->qos_regmap[i], QOS_BANDWIDTH,
>> 			    &pd->qos_save_regs[i][2]);
>> 		regmap_read(pd->qos_regmap[i], QOS_SATURATION,
>> 			    &pd->qos_save_regs[i][3]);
>> 		regmap_read(pd->qos_regmap[i], QOS_EXTCONTROL,
>> 			    &pd->qos_save_regs[i][4]);
>> 	}
>> so we cannot declare qos_save_regs as a plain u32 *qos_save_regs; and
>> allocate the buffer like
>> 	pd->qos_save_regs = kcalloc(pd->num_qos * MAX_QOS_REGS_NUM,
>> 				    sizeof(u32), GFP_KERNEL);
>
> so how about simply swapping indices and doing it like
>
> 	u32 *qos_save_regs[MAX_QOS_REGS_NUM];
>
> 	for (i = 0; i < MAX_QOS_REGS_NUM; i++) {
> 		qos_save_regs[i] = kcalloc(pd->num_qos, sizeof(u32),
> 					   GFP_KERNEL);
> 		/* error handling here */
> 	}
>
> 	...
> 	regmap_read(pd->qos_regmap[i], QOS_SATURATION,
> 		    &pd->qos_save_regs[3][i]);
> 	...

I agree with you on this modification.

>
> Asked the other way around, how did you measure to set MAX_QOS_NODE_NUM to
> 20? From looking at the rk3399 TRM, it seems there are only 38 QoS
> generators on the SoC in general (24 on the rk3288, with PD_VIO having a
> maximum of 9 qos generators), so preparing for 20 seems a bit overkill ;-)
>

About MAX_QOS_NODE_NUM I also have some uncertainty. Although there are
only 38 QoS generators on the rk3399 (24 on the rk3288), not all of the
power domains need to power on/off, so not all QoS generators need save
and restore. So about MAX_QOS_NODE_NUM, what do you suggest?

MAX_QOS_REGS_NUM is 5 because only five QoS registers need to be saved
and restored, namely:

#define QOS_PRIORITY		0x08
#define QOS_MODE		0x0c
#define QOS_BANDWIDTH		0x10
#define QOS_SATURATION		0x14
#define QOS_EXTCONTROL		0x18

>
> Heiko
>
>