From mboxrd@z Thu Jan 1 00:00:00 1970
From: Rosen Penev <rosenp@gmail.com>
To: dmaengine@vger.kernel.org
Cc: Peter Ujfalusi, Vinod Koul, Frank Li, Nishanth Menon, Tero Kristo,
 Santosh Shilimkar, Kees Cook, "Gustavo A. R. Silva",
 linux-kernel@vger.kernel.org (open list),
 linux-arm-kernel@lists.infradead.org (moderated list:TEXAS INSTRUMENTS'
 SYSTEM CONTROL INTERFACE (TI...),
 linux-hardening@vger.kernel.org (open list:KERNEL HARDENING)
Subject: [PATCHv2] firmware: ti_sci: simplify resource allocation
Date: Sun, 3 May 2026 20:12:09 -0700
Message-ID: <20260504031209.618949-1-rosenp@gmail.com>
X-Mailer: git-send-email 2.54.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Use a flexible array member so a ti_sci_resource and its descriptor
array come from a single allocation. Annotate the array with
__counted_by() for extra run-time bounds analysis.

Convert k3-udma as well, since it also builds ti_sci_resource
instances and needs to switch to kzalloc_flex().

Signed-off-by: Rosen Penev <rosenp@gmail.com>
---
v2: add k3-udma fixes.
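For reviewers unfamiliar with the pattern, here is a minimal userspace sketch of what the conversion does: one zeroed allocation now covers the resource header and its trailing descriptors. All names below (range_desc, ranges, ranges_alloc) are illustrative only; the kernel side uses kzalloc_flex()/struct_size() rather than this open-coded offsetof arithmetic.

```c
#include <stdlib.h>
#include <stddef.h>

/* Hypothetical stand-ins for ti_sci_resource_desc / ti_sci_resource. */
struct range_desc {
	unsigned short start;
	unsigned short num;
};

struct ranges {
	unsigned short sets;
	struct range_desc desc[];	/* flexible array member, no pointer */
};

/* One zeroed allocation covering the header plus 'sets' trailing
 * elements -- the userspace equivalent of what kzalloc_flex() /
 * struct_size(res, desc, sets) compute in the kernel. */
static struct ranges *ranges_alloc(unsigned short sets)
{
	struct ranges *r = calloc(1, offsetof(struct ranges, desc) +
				     (size_t)sets * sizeof(struct range_desc));
	if (r)
		r->sets = sets;
	return r;
}
```

A single free(r) (kfree(irq_res) in the patch) then releases both the header and the descriptor array, which is what lets the call sites drop the separate desc allocation and its extra error path.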
 drivers/dma/ti/k3-udma.c               | 180 +++++++++++++------------
 drivers/firmware/ti_sci.c              |   7 +-
 include/linux/soc/ti/ti_sci_protocol.h |   2 +-
 3 files changed, 98 insertions(+), 91 deletions(-)

diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c
index c964ebfcf3b6..ad6c50d0b844 100644
--- a/drivers/dma/ti/k3-udma.c
+++ b/drivers/dma/ti/k3-udma.c
@@ -4584,9 +4584,10 @@ static int udma_setup_resources(struct udma_dev *ud)
 {
 	int ret, i, j;
 	struct device *dev = ud->dev;
-	struct ti_sci_resource *rm_res, irq_res;
+	struct ti_sci_resource *rm_res, *irq_res;
 	struct udma_tisci_rm *tisci_rm = &ud->tisci_rm;
 	u32 cap3;
+	u16 sets;
 
 	/* Set up the throughput level start indexes */
 	cap3 = udma_read(ud->mmrs[MMR_GCFG], 0x2c);
@@ -4664,64 +4665,67 @@ static int udma_setup_resources(struct udma_dev *ud)
 	rm_res = tisci_rm->rm_ranges[RM_RANGE_TCHAN];
 	if (IS_ERR(rm_res)) {
 		bitmap_zero(ud->tchan_map, ud->tchan_cnt);
-		irq_res.sets = 1;
+		sets = 1;
 	} else {
 		bitmap_fill(ud->tchan_map, ud->tchan_cnt);
 		for (i = 0; i < rm_res->sets; i++)
 			udma_mark_resource_ranges(ud, ud->tchan_map,
 						  &rm_res->desc[i], "tchan");
-		irq_res.sets = rm_res->sets;
+		sets = rm_res->sets;
 	}
 
 	/* rchan and matching default flow ranges */
 	rm_res = tisci_rm->rm_ranges[RM_RANGE_RCHAN];
 	if (IS_ERR(rm_res)) {
 		bitmap_zero(ud->rchan_map, ud->rchan_cnt);
-		irq_res.sets++;
+		sets++;
 	} else {
 		bitmap_fill(ud->rchan_map, ud->rchan_cnt);
 		for (i = 0; i < rm_res->sets; i++)
 			udma_mark_resource_ranges(ud, ud->rchan_map,
 						  &rm_res->desc[i], "rchan");
-		irq_res.sets += rm_res->sets;
+		sets += rm_res->sets;
 	}
 
-	irq_res.desc = kzalloc_objs(*irq_res.desc, irq_res.sets);
-	if (!irq_res.desc)
+	irq_res = kzalloc_flex(*irq_res, desc, sets);
+	if (!irq_res)
 		return -ENOMEM;
+
+	irq_res->sets = sets;
+
 	rm_res = tisci_rm->rm_ranges[RM_RANGE_TCHAN];
 	if (IS_ERR(rm_res)) {
-		irq_res.desc[0].start = 0;
-		irq_res.desc[0].num = ud->tchan_cnt;
+		irq_res->desc[0].start = 0;
+		irq_res->desc[0].num = ud->tchan_cnt;
 		i = 1;
 	} else {
 		for (i = 0; i < rm_res->sets; i++) {
-			irq_res.desc[i].start = rm_res->desc[i].start;
-			irq_res.desc[i].num = rm_res->desc[i].num;
-			irq_res.desc[i].start_sec = rm_res->desc[i].start_sec;
-			irq_res.desc[i].num_sec = rm_res->desc[i].num_sec;
+			irq_res->desc[i].start = rm_res->desc[i].start;
+			irq_res->desc[i].num = rm_res->desc[i].num;
+			irq_res->desc[i].start_sec = rm_res->desc[i].start_sec;
+			irq_res->desc[i].num_sec = rm_res->desc[i].num_sec;
 		}
 	}
 	rm_res = tisci_rm->rm_ranges[RM_RANGE_RCHAN];
 	if (IS_ERR(rm_res)) {
-		irq_res.desc[i].start = 0;
-		irq_res.desc[i].num = ud->rchan_cnt;
+		irq_res->desc[i].start = 0;
+		irq_res->desc[i].num = ud->rchan_cnt;
 	} else {
 		for (j = 0; j < rm_res->sets; j++, i++) {
 			if (rm_res->desc[j].num) {
-				irq_res.desc[i].start = rm_res->desc[j].start +
+				irq_res->desc[i].start = rm_res->desc[j].start +
 					ud->soc_data->oes.udma_rchan;
-				irq_res.desc[i].num = rm_res->desc[j].num;
+				irq_res->desc[i].num = rm_res->desc[j].num;
 			}
 			if (rm_res->desc[j].num_sec) {
-				irq_res.desc[i].start_sec = rm_res->desc[j].start_sec +
+				irq_res->desc[i].start_sec = rm_res->desc[j].start_sec +
 					ud->soc_data->oes.udma_rchan;
-				irq_res.desc[i].num_sec = rm_res->desc[j].num_sec;
+				irq_res->desc[i].num_sec = rm_res->desc[j].num_sec;
 			}
 		}
 	}
-	ret = ti_sci_inta_msi_domain_alloc_irqs(ud->dev, &irq_res);
-	kfree(irq_res.desc);
+	ret = ti_sci_inta_msi_domain_alloc_irqs(ud->dev, irq_res);
+	kfree(irq_res);
 	if (ret) {
 		dev_err(ud->dev, "Failed to allocate MSI interrupts\n");
 		return ret;
@@ -4746,9 +4750,10 @@ static int bcdma_setup_resources(struct udma_dev *ud)
 {
 	int ret, i, j;
 	struct device *dev = ud->dev;
-	struct ti_sci_resource *rm_res, irq_res;
+	struct ti_sci_resource *rm_res, *irq_res;
 	struct udma_tisci_rm *tisci_rm = &ud->tisci_rm;
 	const struct udma_oes_offsets *oes = &ud->soc_data->oes;
+	u16 sets;
 	u32 cap;
 
 	/* Set up the throughput level start indexes */
@@ -4828,21 +4833,21 @@ static int bcdma_setup_resources(struct udma_dev *ud)
 						  (char *)range_names[i]);
 	}
 
-	irq_res.sets = 0;
+	sets = 0;
 
 	/* bchan ranges */
 	if (ud->bchan_cnt) {
 		rm_res = tisci_rm->rm_ranges[RM_RANGE_BCHAN];
 		if (IS_ERR(rm_res)) {
 			bitmap_zero(ud->bchan_map, ud->bchan_cnt);
-			irq_res.sets++;
+			sets++;
 		} else {
 			bitmap_fill(ud->bchan_map, ud->bchan_cnt);
 			for (i = 0; i < rm_res->sets; i++)
 				udma_mark_resource_ranges(ud, ud->bchan_map,
 							  &rm_res->desc[i], "bchan");
-			irq_res.sets += rm_res->sets;
+			sets += rm_res->sets;
 		}
 	}
 
@@ -4851,14 +4856,14 @@ static int bcdma_setup_resources(struct udma_dev *ud)
 		rm_res = tisci_rm->rm_ranges[RM_RANGE_TCHAN];
 		if (IS_ERR(rm_res)) {
 			bitmap_zero(ud->tchan_map, ud->tchan_cnt);
-			irq_res.sets += 2;
+			sets += 2;
 		} else {
 			bitmap_fill(ud->tchan_map, ud->tchan_cnt);
 			for (i = 0; i < rm_res->sets; i++)
 				udma_mark_resource_ranges(ud, ud->tchan_map,
 							  &rm_res->desc[i], "tchan");
-			irq_res.sets += rm_res->sets * 2;
+			sets += rm_res->sets * 2;
 		}
 	}
 
@@ -4867,36 +4872,39 @@ static int bcdma_setup_resources(struct udma_dev *ud)
 		rm_res = tisci_rm->rm_ranges[RM_RANGE_RCHAN];
 		if (IS_ERR(rm_res)) {
 			bitmap_zero(ud->rchan_map, ud->rchan_cnt);
-			irq_res.sets += 2;
+			sets += 2;
 		} else {
 			bitmap_fill(ud->rchan_map, ud->rchan_cnt);
 			for (i = 0; i < rm_res->sets; i++)
 				udma_mark_resource_ranges(ud, ud->rchan_map,
 							  &rm_res->desc[i], "rchan");
-			irq_res.sets += rm_res->sets * 2;
+			sets += rm_res->sets * 2;
 		}
 	}
 
-	irq_res.desc = kzalloc_objs(*irq_res.desc, irq_res.sets);
-	if (!irq_res.desc)
+	irq_res = kzalloc_flex(*irq_res, desc, sets);
+	if (!irq_res)
 		return -ENOMEM;
+
+	irq_res->sets = sets;
+
 	if (ud->bchan_cnt) {
 		rm_res = tisci_rm->rm_ranges[RM_RANGE_BCHAN];
 		if (IS_ERR(rm_res)) {
-			irq_res.desc[0].start = oes->bcdma_bchan_ring;
-			irq_res.desc[0].num = ud->bchan_cnt;
+			irq_res->desc[0].start = oes->bcdma_bchan_ring;
+			irq_res->desc[0].num = ud->bchan_cnt;
 			i = 1;
 		} else {
 			for (i = 0; i < rm_res->sets; i++) {
-				irq_res.desc[i].start = rm_res->desc[i].start +
+				irq_res->desc[i].start = rm_res->desc[i].start +
 							oes->bcdma_bchan_ring;
-				irq_res.desc[i].num = rm_res->desc[i].num;
+				irq_res->desc[i].num = rm_res->desc[i].num;
 				if (rm_res->desc[i].num_sec) {
-					irq_res.desc[i].start_sec = rm_res->desc[i].start_sec +
+					irq_res->desc[i].start_sec = rm_res->desc[i].start_sec +
 								oes->bcdma_bchan_ring;
-					irq_res.desc[i].num_sec = rm_res->desc[i].num_sec;
+					irq_res->desc[i].num_sec = rm_res->desc[i].num_sec;
 				}
 			}
 		}
@@ -4907,28 +4915,28 @@ static int bcdma_setup_resources(struct udma_dev *ud)
 	if (ud->tchan_cnt) {
 		rm_res = tisci_rm->rm_ranges[RM_RANGE_TCHAN];
 		if (IS_ERR(rm_res)) {
-			irq_res.desc[i].start = oes->bcdma_tchan_data;
-			irq_res.desc[i].num = ud->tchan_cnt;
-			irq_res.desc[i + 1].start = oes->bcdma_tchan_ring;
-			irq_res.desc[i + 1].num = ud->tchan_cnt;
+			irq_res->desc[i].start = oes->bcdma_tchan_data;
+			irq_res->desc[i].num = ud->tchan_cnt;
+			irq_res->desc[i + 1].start = oes->bcdma_tchan_ring;
+			irq_res->desc[i + 1].num = ud->tchan_cnt;
 			i += 2;
 		} else {
 			for (j = 0; j < rm_res->sets; j++, i += 2) {
-				irq_res.desc[i].start = rm_res->desc[j].start +
+				irq_res->desc[i].start = rm_res->desc[j].start +
 							oes->bcdma_tchan_data;
-				irq_res.desc[i].num = rm_res->desc[j].num;
+				irq_res->desc[i].num = rm_res->desc[j].num;
 
-				irq_res.desc[i + 1].start = rm_res->desc[j].start +
+				irq_res->desc[i + 1].start = rm_res->desc[j].start +
 							oes->bcdma_tchan_ring;
-				irq_res.desc[i + 1].num = rm_res->desc[j].num;
+				irq_res->desc[i + 1].num = rm_res->desc[j].num;
 				if (rm_res->desc[j].num_sec) {
-					irq_res.desc[i].start_sec = rm_res->desc[j].start_sec +
+					irq_res->desc[i].start_sec = rm_res->desc[j].start_sec +
 								oes->bcdma_tchan_data;
-					irq_res.desc[i].num_sec = rm_res->desc[j].num_sec;
-					irq_res.desc[i + 1].start_sec = rm_res->desc[j].start_sec +
+					irq_res->desc[i].num_sec = rm_res->desc[j].num_sec;
+					irq_res->desc[i + 1].start_sec = rm_res->desc[j].start_sec +
 								oes->bcdma_tchan_ring;
-					irq_res.desc[i + 1].num_sec = rm_res->desc[j].num_sec;
+					irq_res->desc[i + 1].num_sec = rm_res->desc[j].num_sec;
 				}
 			}
 		}
@@ -4936,35 +4944,35 @@ static int bcdma_setup_resources(struct udma_dev *ud)
 	if (ud->rchan_cnt) {
 		rm_res = tisci_rm->rm_ranges[RM_RANGE_RCHAN];
 		if (IS_ERR(rm_res)) {
-			irq_res.desc[i].start = oes->bcdma_rchan_data;
-			irq_res.desc[i].num = ud->rchan_cnt;
-			irq_res.desc[i + 1].start = oes->bcdma_rchan_ring;
-			irq_res.desc[i + 1].num = ud->rchan_cnt;
+			irq_res->desc[i].start = oes->bcdma_rchan_data;
+			irq_res->desc[i].num = ud->rchan_cnt;
+			irq_res->desc[i + 1].start = oes->bcdma_rchan_ring;
+			irq_res->desc[i + 1].num = ud->rchan_cnt;
 			i += 2;
 		} else {
 			for (j = 0; j < rm_res->sets; j++, i += 2) {
-				irq_res.desc[i].start = rm_res->desc[j].start +
+				irq_res->desc[i].start = rm_res->desc[j].start +
 							oes->bcdma_rchan_data;
-				irq_res.desc[i].num = rm_res->desc[j].num;
+				irq_res->desc[i].num = rm_res->desc[j].num;
 
-				irq_res.desc[i + 1].start = rm_res->desc[j].start +
+				irq_res->desc[i + 1].start = rm_res->desc[j].start +
 							oes->bcdma_rchan_ring;
-				irq_res.desc[i + 1].num = rm_res->desc[j].num;
+				irq_res->desc[i + 1].num = rm_res->desc[j].num;
 				if (rm_res->desc[j].num_sec) {
-					irq_res.desc[i].start_sec = rm_res->desc[j].start_sec +
+					irq_res->desc[i].start_sec = rm_res->desc[j].start_sec +
 								oes->bcdma_rchan_data;
-					irq_res.desc[i].num_sec = rm_res->desc[j].num_sec;
-					irq_res.desc[i + 1].start_sec = rm_res->desc[j].start_sec +
+					irq_res->desc[i].num_sec = rm_res->desc[j].num_sec;
+					irq_res->desc[i + 1].start_sec = rm_res->desc[j].start_sec +
 								oes->bcdma_rchan_ring;
-					irq_res.desc[i + 1].num_sec = rm_res->desc[j].num_sec;
+					irq_res->desc[i + 1].num_sec = rm_res->desc[j].num_sec;
 				}
 			}
 		}
 	}
 
-	ret = ti_sci_inta_msi_domain_alloc_irqs(ud->dev, &irq_res);
-	kfree(irq_res.desc);
+	ret = ti_sci_inta_msi_domain_alloc_irqs(ud->dev, irq_res);
+	kfree(irq_res);
 	if (ret) {
 		dev_err(ud->dev, "Failed to allocate MSI interrupts\n");
 		return ret;
@@ -4977,10 +4985,11 @@ static int pktdma_setup_resources(struct udma_dev *ud)
 {
 	int ret, i, j;
 	struct device *dev = ud->dev;
-	struct ti_sci_resource *rm_res, irq_res;
+	struct ti_sci_resource *rm_res, *irq_res;
 	struct udma_tisci_rm *tisci_rm = &ud->tisci_rm;
 	const struct udma_oes_offsets *oes = &ud->soc_data->oes;
 	u32 cap3;
+	u16 sets;
 
 	/* Set up the throughput level start indexes */
 	cap3 = udma_read(ud->mmrs[MMR_GCFG], 0x2c);
@@ -5057,13 +5066,13 @@ static int pktdma_setup_resources(struct udma_dev *ud)
 	if (IS_ERR(rm_res)) {
 		/* all rflows are assigned exclusively to Linux */
 		bitmap_zero(ud->rflow_in_use, ud->rflow_cnt);
-		irq_res.sets = 1;
+		sets = 1;
 	} else {
 		bitmap_fill(ud->rflow_in_use, ud->rflow_cnt);
 		for (i = 0; i < rm_res->sets; i++)
 			udma_mark_resource_ranges(ud, ud->rflow_in_use,
 						  &rm_res->desc[i], "rflow");
-		irq_res.sets = rm_res->sets;
+		sets = rm_res->sets;
 	}
 
 	/* tflow ranges */
@@ -5071,55 +5080,58 @@ static int pktdma_setup_resources(struct udma_dev *ud)
 	if (IS_ERR(rm_res)) {
 		/* all tflows are assigned exclusively to Linux */
 		bitmap_zero(ud->tflow_map, ud->tflow_cnt);
-		irq_res.sets++;
+		sets++;
 	} else {
 		bitmap_fill(ud->tflow_map, ud->tflow_cnt);
 		for (i = 0; i < rm_res->sets; i++)
 			udma_mark_resource_ranges(ud, ud->tflow_map,
 						  &rm_res->desc[i], "tflow");
-		irq_res.sets += rm_res->sets;
+		sets += rm_res->sets;
 	}
 
-	irq_res.desc = kzalloc_objs(*irq_res.desc, irq_res.sets);
-	if (!irq_res.desc)
+	irq_res = kzalloc_flex(*irq_res, desc, sets);
+	if (!irq_res)
 		return -ENOMEM;
+
+	irq_res->sets = sets;
+
 	rm_res = tisci_rm->rm_ranges[RM_RANGE_TFLOW];
 	if (IS_ERR(rm_res)) {
-		irq_res.desc[0].start = oes->pktdma_tchan_flow;
-		irq_res.desc[0].num = ud->tflow_cnt;
+		irq_res->desc[0].start = oes->pktdma_tchan_flow;
+		irq_res->desc[0].num = ud->tflow_cnt;
 		i = 1;
 	} else {
 		for (i = 0; i < rm_res->sets; i++) {
-			irq_res.desc[i].start = rm_res->desc[i].start +
+			irq_res->desc[i].start = rm_res->desc[i].start +
 						oes->pktdma_tchan_flow;
-			irq_res.desc[i].num = rm_res->desc[i].num;
+			irq_res->desc[i].num = rm_res->desc[i].num;
 			if (rm_res->desc[i].num_sec) {
-				irq_res.desc[i].start_sec = rm_res->desc[i].start_sec +
+				irq_res->desc[i].start_sec = rm_res->desc[i].start_sec +
 							oes->pktdma_tchan_flow;
-				irq_res.desc[i].num_sec = rm_res->desc[i].num_sec;
+				irq_res->desc[i].num_sec = rm_res->desc[i].num_sec;
 			}
 		}
 	}
 	rm_res = tisci_rm->rm_ranges[RM_RANGE_RFLOW];
 	if (IS_ERR(rm_res)) {
-		irq_res.desc[i].start = oes->pktdma_rchan_flow;
-		irq_res.desc[i].num = ud->rflow_cnt;
+		irq_res->desc[i].start = oes->pktdma_rchan_flow;
+		irq_res->desc[i].num = ud->rflow_cnt;
 	} else {
 		for (j = 0; j < rm_res->sets; j++, i++) {
-			irq_res.desc[i].start = rm_res->desc[j].start +
+			irq_res->desc[i].start = rm_res->desc[j].start +
 						oes->pktdma_rchan_flow;
-			irq_res.desc[i].num = rm_res->desc[j].num;
+			irq_res->desc[i].num = rm_res->desc[j].num;
 			if (rm_res->desc[j].num_sec) {
-				irq_res.desc[i].start_sec = rm_res->desc[j].start_sec +
+				irq_res->desc[i].start_sec = rm_res->desc[j].start_sec +
 							oes->pktdma_rchan_flow;
-				irq_res.desc[i].num_sec = rm_res->desc[j].num_sec;
+				irq_res->desc[i].num_sec = rm_res->desc[j].num_sec;
 			}
 		}
 	}
 
-	ret = ti_sci_inta_msi_domain_alloc_irqs(ud->dev, &irq_res);
-	kfree(irq_res.desc);
+	ret = ti_sci_inta_msi_domain_alloc_irqs(ud->dev, irq_res);
+	kfree(irq_res);
 	if (ret) {
 		dev_err(ud->dev, "Failed to allocate MSI interrupts\n");
 		return ret;
diff --git a/drivers/firmware/ti_sci.c b/drivers/firmware/ti_sci.c
index e027a2bd8f26..04d99c1fafa1 100644
--- a/drivers/firmware/ti_sci.c
+++ b/drivers/firmware/ti_sci.c
@@ -3574,16 +3574,11 @@ devm_ti_sci_get_resource_sets(const struct ti_sci_handle *handle,
 	bool valid_set = false;
 	int i, ret, res_count;
 
-	res = devm_kzalloc(dev, sizeof(*res), GFP_KERNEL);
+	res = devm_kzalloc(dev, struct_size(res, desc, sets), GFP_KERNEL);
 	if (!res)
 		return ERR_PTR(-ENOMEM);
 
 	res->sets = sets;
-	res->desc = devm_kcalloc(dev, res->sets, sizeof(*res->desc),
-				 GFP_KERNEL);
-	if (!res->desc)
-		return ERR_PTR(-ENOMEM);
-
 	for (i = 0; i < res->sets; i++) {
 		ret = handle->ops.rm_core_ops.get_range(handle, dev_id,
 							sub_types[i],
diff --git a/include/linux/soc/ti/ti_sci_protocol.h b/include/linux/soc/ti/ti_sci_protocol.h
index fd104b666836..7632bb11c862 100644
--- a/include/linux/soc/ti/ti_sci_protocol.h
+++ b/include/linux/soc/ti/ti_sci_protocol.h
@@ -599,7 +599,7 @@ struct ti_sci_handle {
 struct ti_sci_resource {
 	u16 sets;
 	raw_spinlock_t lock;
-	struct ti_sci_resource_desc *desc;
+	struct ti_sci_resource_desc desc[] __counted_by(sets);
 };
 
 #if IS_ENABLED(CONFIG_TI_SCI_PROTOCOL)
-- 
2.54.0
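As a footnote on the header change: annotating desc[] with __counted_by(sets) is what enables the "extra runtime analysis" mentioned in the commit message. A rough userspace sketch of the idea follows; the guard macro and struct name here are hypothetical (in the kernel the annotation comes from include/linux/compiler_attributes.h), and the attribute is only understood by recent compilers (roughly GCC 15+ and Clang 18+):

```c
#include <stddef.h>

/* counted_by tells bounds-checking instrumentation (FORTIFY_SOURCE,
 * UBSAN) that the flexible array holds exactly 'sets' elements at run
 * time, so out-of-bounds indexing can be trapped. It changes
 * diagnostics only, never layout; the guarded no-op below keeps this
 * building on compilers without support. */
#if defined(__has_attribute)
# if __has_attribute(counted_by)
#  define counted_by(member) __attribute__((counted_by(member)))
# endif
#endif
#ifndef counted_by
# define counted_by(member)	/* no-op fallback */
#endif

struct demo_resource {
	unsigned short sets;	/* must be set before desc[] is indexed */
	unsigned short desc[] counted_by(sets);
};
```

Because the count member must be valid before the array is indexed, the patch assigns irq_res->sets = sets immediately after kzalloc_flex() and before any desc[] access, matching the ordering the annotation requires.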