Message-ID: <55AE9235.4080809@linaro.org>
Date: Tue, 21 Jul 2015 19:40:53 +0100
From: Srinivas Kandagatla
To: Stephen Boyd
CC: linux-arm-kernel@lists.infradead.org, Greg Kroah-Hartman,
    Rob Herring, Mark Brown, s.hauer@pengutronix.de,
    linux-api@vger.kernel.org, linux-kernel@vger.kernel.org,
    devicetree@vger.kernel.org, linux-arm-msm@vger.kernel.org,
    arnd@arndb.de, pantelis.antoniou@konsulko.com, mporter@konsulko.com,
    stefan.wahren@i2se.com, wxt@rock-chips.com, Maxime Ripard
Subject: Re: [PATCH v8 1/9] nvmem: Add a simple NVMEM framework for nvmem providers
In-Reply-To: <55AE8864.6020608@codeaurora.org>

On 21/07/15 18:59, Stephen Boyd wrote:
> On 07/21/2015 02:41 AM, Srinivas Kandagatla wrote:
>> Thanks Stephen for review,
>>
>> On 20/07/15 22:11, Stephen Boyd wrote:
>>> On 07/20/2015 07:43 AM, Srinivas Kandagatla wrote:
>>>> diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
>>>> new file mode 100644
>>>> index 0000000..bde5528
>>>> --- /dev/null
>>>> +++ b/drivers/nvmem/core.c
>>>> @@ -0,0 +1,384 @@
>>>>
>>>> +
>>>> +static int nvmem_add_cells(struct nvmem_device *nvmem,
>>>> +			   const struct nvmem_config *cfg)
>>>> +{
>>>> +	struct nvmem_cell **cells;
>>>> +	const struct nvmem_cell_info *info = cfg->cells;
>>>> +	int i, rval;
>>>> +
>>>> +	cells = kzalloc(sizeof(*cells) * cfg->ncells, GFP_KERNEL);
>>>
>>> kcalloc?
>>
>> Only reason for using kzalloc is to give the code more flexibility to
>> free any pointer in the array in case of errors.
>
> Still lost. The arrays are allocated down below in the for loop. This is
> allocating a bunch of pointers so using kcalloc() here avoids problems
> with overflows causing kzalloc() to allocate fewer pointers than
> requested. I'm not suggesting we replace the for loop with a kcalloc,
> just this single line.
>

Yes, we could replace the loop with kcalloc, but the problem is how can
we handle freeing an element from that array? AFAIK we can only free the
full array rather than each element if we allocate it via kcalloc,
correct me if I'm wrong?

>>
>>>
>>>> +	if (!cells)
>>>> +		return -ENOMEM;
>>>> +
>>>> +	for (i = 0; i < cfg->ncells; i++) {
>>>> +		cells[i] = kzalloc(sizeof(**cells), GFP_KERNEL);
>>>> +		if (!cells[i]) {
>>>> +			rval = -ENOMEM;
>>>> +			goto err;
>>>> +		}
>>>> +
>
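
For readers following the kcalloc() point above: the suggestion is to change
only the allocation of the pointer array itself, not the per-element
allocations, so individual entries can still be kfree()d in the error path.
A minimal sketch of that shape follows; it assumes the struct definitions
from the patch series, uses a made-up function name, and omits the rest of
the function body, so treat it as an illustration rather than the code that
was actually posted or merged.

#include <linux/slab.h>

/* Illustrative sketch only, not the patch as posted. */
static int nvmem_add_cells_sketch(struct nvmem_device *nvmem,
				  const struct nvmem_config *cfg)
{
	struct nvmem_cell **cells;
	int i, rval;

	/*
	 * kcalloc() replaces only this one line: it allocates the array of
	 * pointers and checks cfg->ncells * sizeof(*cells) for overflow.
	 */
	cells = kcalloc(cfg->ncells, sizeof(*cells), GFP_KERNEL);
	if (!cells)
		return -ENOMEM;

	for (i = 0; i < cfg->ncells; i++) {
		/* Each cell is still a separate kzalloc() allocation... */
		cells[i] = kzalloc(sizeof(**cells), GFP_KERNEL);
		if (!cells[i]) {
			rval = -ENOMEM;
			goto err;
		}
	}

	/* (cell setup and registration from the original function omitted) */
	return 0;

err:
	/* ...so the entries that were allocated can be freed one by one. */
	while (--i >= 0)
		kfree(cells[i]);
	kfree(cells);	/* the kcalloc()'d pointer array is freed as a whole */
	return rval;
}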